
A few thoughts on struggle in learning. I'll confess to having taught undergraduates in both the classroom and the research lab. My classroom teaching bona fides are limited to six years of college-level chemistry lecture/lab and quite a bit of one-on-one chemistry tutoring.

Many students approach college chemistry courses with caution. For some, a year of freshman general chemistry is mandatory for their major. Tracks such as pre-med, physical therapy, and veterinary medicine require organic chemistry in addition to general chemistry. As my specialty lies in organic chemistry, I have experience teaching both general and organic chemistry students.

From my perspective, general chemistry is as much a mathematics course as it is a science course for many first-year students. A significant portion of general chemistry involves establishing and solving problems that necessitate fundamental algebraic manipulations and calculations. Skills such as balancing equations, maintaining units throughout calculations, and understanding significant figures are essential to master. Additionally, there is the challenge of learning the new vocabulary.
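To make that concrete, here is a hypothetical illustration (my own sketch, not from any particular text) of the kind of unit bookkeeping that runs through general chemistry. Writing the units next to each step catches setup errors early.

```python
# Hypothetical example of dimensional analysis in a gen chem problem.
# The units are tracked in comments so each step can be checked.

def molarity(mass_g, molar_mass_g_per_mol, volume_l):
    """Concentration in mol/L from mass, molar mass, and solution volume."""
    moles = mass_g / molar_mass_g_per_mol  # g / (g/mol) -> mol
    return moles / volume_l                # mol / L -> mol/L

# 5.85 g of NaCl (molar mass ~58.5 g/mol) dissolved to 0.500 L:
print(round(molarity(5.85, 58.5, 0.500), 3))  # 0.2
```

If the units don't cancel to mol/L, the setup is wrong no matter what the calculator says.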

Students who managed to avoid chemistry in high school sometimes found themselves treading water in college chemistry, afraid of taking two 5-credit-hour hits to their GPAs. Most pushed on and got through it. General chemistry is a foundation course and is critical for further pursuits in fields involving the use of chemicals. Unfortunately, a year of gen chem doesn't really make a person able to function as an independent chemist. It is helpful, though, for technicians doing routine chemical tests in a lab.

A common problem I encountered while teaching chemistry was the tendency of some students to give up hope of ever "getting it". They would put off attending office hours to discuss their difficulties until too far down the semester timeline, usually after a few botched regular exams or a low midterm grade. Frequently the struggling student was having trouble with, or neglecting, the assigned homework from the text.

Now and then you'd run into a prof with performance expectations that even they might not have met as an undergrad. They'd strut around acting as though they were singlehandedly maintaining "proper" academic ideals. Who knows, maybe they had a point. You can try to enthuse everyone with words and pictures, but inevitably there are those who are utterly disinterested, inept, or just anxious to put chemistry behind them.

In retrospect, I should have been more direct in calling in more students to office hours who were in grade trouble early in the term. Unfortunately, like many other profs I sometimes subscribed to the sink or swim approach to college education where unsuitable students are culled from the herd. It is a sort of Darwinistic mindset that is easy to fall into. In the end, we have to give all students a fair chance or even a second chance to earn the credentials that the institution confers.

Colleges are organizations that award credentials to verify achievement in meeting or exceeding educational standards set by in-house professors. The credential tells people that you completed what you started: you navigated a complex maze of intellectual challenges and came out the other side a success.

For any given subject there are always those who struggle with it to some extent. It could be from simple boredom, distractions from real life, or difficulty comprehending the material. It may be that the subject just isn't for them. For myself, I struggled with a foreign language and eventually gave up. I needed full immersion, and that wasn't going to happen. I still regret giving up.

One problem that can often be addressed, however, is the matter of struggle itself. It seems that many students are not accustomed to struggling with learning. All of us have learned particular subjects successfully because they "just fit" our cognitive abilities or interests, or perhaps because they were brilliantly presented to us. Or it was a special time in our lives when we were uniquely receptive. It could very well be that a student's previous exposure to the subject was a bit shallow, with grade inflation leading to an overestimation of their abilities.

Unfortunately for some, the very necessity of struggle convinces them that the subject is beyond their abilities. They come to believe that if the subject does not immediately stick or appear obvious, they might as well give up because they will never "get it", and their self-esteem collapses along with that belief.

Giving up on a subject early on could allow them to switch directions in their education with less time lost, and perhaps they would be relieved by that. In this case, giving up is just making a better choice based on experience. Regardless, students should be unburdened early on of the idea that struggle is a predictor of failure. In reality, most learning involves struggle to some extent.

Remedies for Struggle

Reading the assigned chapters several times is helpful. On the first pass, scan the content for a general idea of where the topic is going. Next, do a careful reading with a focus on the example problems; try to understand them and the reasoning presented. Then work on the assigned problems, opening the solutions manual only if stuck. Struggle with each problem a bit. If there is time, a third reading can help cement the concepts in the chapter. Success in solving assigned problems can be extremely encouraging for a student.

If laboring alone isn't helping, some schools have tutoring resources available. If not, there are often tutors who charge by the hour. A few hours of tutoring may be all it takes to get back on track. There may be study partners from your class who can work with you. Then again, office hours with your prof or TA can help you over some rough spots. The point is: struggle!

When I was writing exams, I would look at the example problems in the text as well as the assigned problems. I chose the assigned problems because I felt that they got to the heart of the concepts I held as important at the level of the content. I would use the assigned problems, or those from lecture, to write exam problems using different substances where a reaction would lead to an unambiguous answer. It's OK to write some questions that require a bit of logic to solve, but you can't turn the exam into an intelligence test.

I once taught a course in chemistry for non-majors. These were students who had tried to get into Geology for Poets or Astronomy but couldn’t get in. They were trapped into taking chemistry for their science requirement for graduation! Early on, a few “representatives” of the class cornered me after a lecture and informed me that “everyone” expected true/false questions on the exams. Pausing, I said I would give them true/false questions, but they would get 1 point for a correct answer, 0 points for no answer, and -1 point for an incorrect answer. The lesson was that if you don’t know something it might be better to just be quiet. After a single exam they never mentioned true/false questions again.
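The arithmetic behind that scoring scheme is worth a quick sketch (my own illustration, not part of the original anecdote): under +1/0/-1 scoring, a blind guess is worth exactly nothing on average.

```python
# Expected value of answering a true/false question scored +1/0/-1,
# given your probability of being correct. A pure guess (p = 0.5) has
# the same expected score as leaving the question blank: zero.

def expected_score(p_correct):
    """Expected points from answering when your chance of being right is p_correct."""
    return p_correct * 1 + (1 - p_correct) * (-1)  # simplifies to 2p - 1

print(expected_score(0.5))   # 0.0 -- coin-flip guessing earns nothing
print(expected_score(0.75))  # 0.5 -- answering pays only when you actually know something
```

Which was exactly the lesson: if you don't know, it might be better to just be quiet.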

Students eventually realize that chemistry is a highly vertical subject. The more advanced and interesting concepts are built upon or knitted together from those learned earlier. Later coursework will assume that the student has a grasp of content from earlier prerequisite courses. Thirty-one years later the 95 course evaluations from that Catholic women’s college still sit in an unopened envelope in my office.

Find a way to deal with anxiety. Exercise, or find a counselor, psychologist, or psychiatrist for help. Anxiety is "druggable"; that is, there are meds for it that are very effective. I'm sure there are exceptions, but a family practice doc can't go very far down the road in treating anxiety. A psychiatrist can fine-tune and mix the individual meds to best suit you. It really works.

Most importantly, the student should not EVER get behind in the coursework. It might even be better to drop the class than try to make up for much lost time. The normal rate of chemistry content flow to be absorbed is already high. To have to make up for time lost while also keeping up with the current content flow is often impossible.

Finally, consider that struggle just means that you have to put forth effort to learn. True learning means that your neurons are making new connections in your brain, not just images of something new. To have learned means that your brain has found a way to take diverse inputs and assemble them into part of your consciousness. Sometimes it isn’t easy, but persistence is the key.

As wondrous as our physical and chemical senses are, they are severely constrained in a few fundamental ways. Our vision is limited to our retinal response to a narrow, roughly one-octave-wide band of electromagnetic radiation. As it happens, this band of light can be absorbed non-destructively by, or stimulate change in, the outer, valence level of inorganic and organic molecules. Electrons can be promoted to higher energy levels, temporarily storing potential energy that can then do work on features at the molecular level. In the retina, this triggers an electrical signal that propagates along the nervous system.

Owing to the constraints of the optics of the band of light we can sense, we cannot see atoms or molecules with the naked eye. This is because the wavelengths in the narrow range of visible light are larger than objects at the atomic scale. Instead, we perceive matter as a continuous mass of material with no indication of atomic scale structures. No void can be seen between the nucleus and the electrons. For the overwhelming majority of human history, we had no notion of atoms and molecules.

Democritus (ca 460-370 BCE) famously asserted that there exist only atoms and vacuum; everything else is opinion. The point is that atoms and vacuum were proposed more than 2000 years ago in Greece. The words of Democritus have survived over time, but I'll hazard a guess that they were not influential in the rise of modern atomic theory in the 19th and 20th centuries. A good question for another day.

In all chemistry, energy is taken up by molecules as electronic, rotational, vibrational, or translational energy.

Thumbnail Sketch of the Interaction of Light and Matter

Radio waves are a long-wavelength band that can interact with electrically conductive materials. Electromagnetic waves with wavelengths greater than 1 meter are considered radio waves. As a radio wave encounters a conductor, the oscillating electric field of the wave causes charge to oscillate in the conductor at a rate matching the wave. Radio waves, whether in electronic devices or in space, are formed by the acceleration of charged particles. Recall that when you cause a charged particle to change its direction of motion, e.g., by a magnetic field, it is undergoing an acceleration. It is useful to know that radio waves are non-ionizing.
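The wavelength-frequency relation lambda = c/f makes the band boundary concrete; this is my own back-of-envelope sketch, not from the original text.

```python
# lambda = c / f: a 1 m wavelength corresponds to about 300 MHz, so
# "wavelength greater than 1 meter" means frequencies below roughly 300 MHz.

C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for an electromagnetic wave of frequency freq_hz."""
    return C_M_PER_S / freq_hz

print(round(wavelength_m(300e6), 2))  # ~1.0 m, upper-frequency edge of the radio band
print(round(wavelength_m(10e6), 1))   # ~30.0 m, in the decametric range
```

The 10 MHz figure is worth remembering; it reappears below in the discussion of Jupiter's decametric emissions.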

Microwave energy torques dipolar molecules, causing them to rotate back and forth as the waves pass. This rotational energy can be transferred to translational and vibrational energy through collisions, raising the temperature. A molecule does not need fully separated charges like a zwitterion; a partial negative charge on one side and a partial positive charge on the other, as in water, is enough. This is a dipole. Water has a strong dipole and readily absorbs energy from microwaves.

Water molecule with dipole indicated.

Infrared radiation causes individual chemical bonds and entire molecular frameworks to vibrate in specific ways. The Wikipedia article on this topic is quite good. When a molecule absorbs heat energy, the energy is partitioned into a variety of vibrational modes which can bleed off into other energy modes, raising the temperature.

Ultraviolet light is energetic enough to break a chemical bond into a pair of "radicals", species bearing a single unpaired valence electron. These radicals are exceedingly reactive over their very short lifetimes and may or may not collapse back into the original bond. If they do not, they can diffuse away and react with features that are not normally reactive, altering other molecules. UV light is very disruptive to biomolecules.

X-rays are more energetic than ultraviolet light and can cause destructive ionization of molecules along their path. They can dislodge inner electrons, leaving an inner-shell vacancy. An outer-shell electron can collapse into the vacancy and release energy that ejects a valence-level electron, called an Auger electron. This ionizes the atom and changes its reactivity. X-rays are also produced by the deceleration of electrons against a solid such as copper, though lighter targets can produce x-rays as well.

Gamma radiation originates from atomic nuclei and their energy transitions. Gamma rays are the highest-energy form of electromagnetic radiation and cover a broad range of energies at wavelengths below about 0.01 nanometers. Some radioactive nuclides emit only gamma rays, relaxing from an excited nuclear state. Others emit an alpha or beta particle, leaving an excited nucleus that then emits a gamma ray to relax.
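The climb in photon energy across these bands can be sketched with E = hc/lambda; the figures below are my own back-of-envelope numbers, not from the text.

```python
# Photon energy E = h*c / lambda, with h*c ~ 1239.84 eV*nm. A sketch of
# how energy climbs from visible light up to the gamma wavelengths above.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a wavelength given in nanometers."""
    return HC_EV_NM / wavelength_nm

print(round(photon_energy_ev(500), 2))  # ~2.48 eV: green light, gentle valence-level chemistry
print(round(photon_energy_ev(250), 2))  # ~4.96 eV: UV, enough to break many covalent bonds
print(round(photon_energy_ev(0.01)))    # ~124000 eV: the 0.01 nm gamma regime
```

Five orders of magnitude separate visible photons from gammas, which is why one band gently excites valence electrons while the other shreds molecules along its path.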

The wavelengths of radio waves are too long, and the photons too feeble in energy, to interact with biomolecules. Some radio waves come from the synchrotron effect, in which charged particles such as electrons corkscrew around the magnetic field lines of a planet and release energy as radio waves. In the case of Jupiter and its moon Io, a stream of moving charged particles is accelerated by the magnetic field; the particles emit mainly in the 10 to 40 MHz (decametric) range as they spiral along the field lines into Jupiter. Io sends charged particles into the planet's polar regions where the magnetic field lines bunch up, leaving a visible trace of borealis-like gas that glows. The radio emission takes the shape of a conical surface and is detectable here only when the cone sweeps past Earth as Io orbits Jupiter.

Image from NASA. “This is a representation of the Jupiter-Io system and interaction. The blue cloud is the Io plasma torus, which is a region of higher concentration of ions and electrons located at Io’s orbit. This conceptual image shows the radio emission pattern from Jupiter. The multi-colored lines represent the magnetic field lines that link Io’s orbit with Jupiter’s atmosphere. The radio waves emerge from the source which is located at the line of force in the magnetic field and propagate along the walls of a hollow cone (grey area). Juno receives the signal only when Jupiter’s rotation sweeps that cone over the spacecraft, in the same way a lighthouse beacon shines briefly upon a ship at sea. Juno’s orbit is represented by the white line crossing the cone.”
NASA/GSFC/Jay Friedlander
Jupiter’s volcanic moon Io funnels charged particles into the planet’s polar regions where the magnetic field is strongest. This leaves a visible trace of borealis-like trails that glow. Source: NASA.

An atomic nucleus can absorb or emit gamma rays. For instance, the gamma emitter antimony-124 emits a 1.7 MeV gamma that can be absorbed by beryllium-9, which photodisintegrates into a 24 kiloelectronvolt neutron and two stable He-4 nuclei. This nuclear reaction can be used to survey for beryllium ore deposits by detecting neutron backscatter.

Ok, done with that.

So, not all electromagnetic radiation plays nicely, or at all, with any given chemical substance. The narrow visible band of light is uniquely well suited to interact, mostly non-destructively, with living things. Chemistry is about the behavior of the outer, valence-level electrons around and between atoms and molecules.

The retinas in our eyes continuously send signals to the brain that result in a very curious thing: we perceive color rather than just a grey scale. Not just the colors of the rainbow, but also more nuanced perceptions like pastels and browns in their many textures, all with binocular vision!

The constraints on human vision depend on the chemical composition and anatomical structure of the retina as well as the construction of the brain. As the descriptions of the various bands of electromagnetic radiation suggest, there is much in the universe that our senses cannot detect. We do not directly view the radio, microwave, infrared, ultraviolet, x-ray, or gamma-ray universe.

Our daily understanding of the universe is mostly framed by what we can see with the unique biochemistry and anatomy of the retina. That's not a bad thing, but for an appreciation of the true scope of the universe we have to find ways to view the other electromagnetic bands. And we do, with radio telescopes and with satellites that pick up x-ray and UV energy to form images. Now, with JWST, we're peering deeper into the universe as revealed by infrared energy. The longer wavelengths of infrared can pass through clouds of dust particles that previously blocked our view in the optical spectrum.

The structures of atoms and molecules are characterized by the very large fraction of "empty" space they contain. Electrons seem to be point charges with no measurable size, yet they have mass, spin, and the same magnitude of charge, opposite in sign, as the much heavier proton. And the proton is not even a fundamental particle but a composite particle. It's like a bag with three hard objects in it.

The universe is wildly different from what our senses present to us. All matter [1] is made of mostly empty space. What we see as color doesn't exist outside of our brains. Our sensation of smell is the same. Cold is not a thing; it is just the absence of heat energy. Finally, our consciousness exists only in our brains. It is a natural phenomenon that is highly confined, self-aware, and may be imaged through its electrical activity or by F-19 MRI with fluorinated tracers. This wondrous thing is happening on the pale blue dot floating in the vastness of empty space. So far, we can't find anywhere else in the observable universe where this occurs.

It is good to remember that we search for extraterrestrial intelligence to a large extent with radio telescopes. On earth, the use of radio communication is a very recent thing, tracing back to the beginning of radio in 1886 in the laboratory of Professor Heinrich Rudolf Hertz at the University of Karlsruhe. Hertz would generate a spark in a transmitter loop and find that a second spark occurred across a gap in a separate receiver loop.

By 1894, Marconi was working on his scheme to produce wireless transmissions over long distances. The wider development of radio transmission and reception is well documented, and the reader can easily find a rabbit hole into its history.

In order for the discovery of radio transmission to occur, several other things had to be developed first. The discovery of electricity had to precede the development of devices to generate stable sources of electricity on demand and with sufficient power. Then there is the matter of DC vs AC. Some minimal awareness of coulombs, voltage, current, electromagnetism, conductors and insulators, and wire manufacturing is necessary to build induction coils for spark generation.

James Clerk Maxwell had developed his equations of electromagnetism before Hertz's discovery of wireless transmission. Hertz was very familiar with Maxwell's work from his PhD studies and postdoctoral work under Kirchhoff and Helmholtz. He was well prepared in the theory of electromagnetism and was asking the right questions to guide his experimental work.

Radio transmission came to be after a period of study and experimentation by people like Marconi, Tesla and many others who had curiosity, resources and drive to advance the technology. As the field of electronics grew, so did the field of radio transmission. It’s not enough to build a transmitter- a receiver was required as well. Transmitter power and receiver sensitivity were the pragmatics of the day.

This was how we did it on earth. It was facilitated by the combined use of our brains, limbs, opposable thumbs, and grasping hands. Interest in novelty and ingenuity was also widespread during this period of the industrial revolution. People who lived 10,000 years ago could certainly have pulled it off as well as we did, but the knowledge base necessary for even dreaming up the concepts was not present and wouldn't be for thousands of years. The materials science, mathematics, understanding of physics, and perhaps even cultures that prized curiosity and invention were not yet in place.

In order for extraterrestrials to reach out with radio signals that Earthlings could detect, they would have to develop enough technology to broadcast (and receive) powerful radio transmissions. If you consider every single mechanical and electrical component necessary for this, each will have had to result from a long line of previous developmental work. Materials of construction like electrical conductors could only arise from the prior development of mining, smelting, and refining as a prelude to fabricating conductors to move electrical current around.

Radio transmission requires electrical power generation and at least some distribution, and none of that is possible without the necessary materials of construction and mechanical and electrical components. Most of the materials would have to have been mined and smelted previously. Electrical power generators must themselves be driven by something to produce electricity. On earth we use coal or natural gas to produce steam that drives generator turbines, along with nuclear and hydroelectric power. ETs would face a similar problem in generating electrical power.

If you follow the timeline leading to every single component of an operating radio transmitter, you'll see that it requires the application of other technologies and materials. It seems that radio transmission from an extraterrestrial home planet needs something like an industrial base to get started.

What if there were intelligent extraterrestrials who were not anatomically suited to constructing radio transmitters for their own Search for Extraterrestrial Intelligence, or just for local use? Perhaps they are very intelligent but not far enough along to have developed radio. Or what if they were simply disinterested in radio? What if they used radio only for a short window in time and have since been using something not detectable from Earth, as we do with optical cable? The point is that we would never hear them by radio, yet they would be there.

Surely there is a non-zero probability of this happening. This dearth of signal may be so prevalent that we will conclude that we are alone in our local region of space. Perhaps funding will be cut and we’ll quit looking. We can take that finding to fuel our sadness of being alone in the cosmos. Or we could use it to appreciate just how unique life is and take better care of ourselves.

[1] Not including dark matter, if it really exists. I remain skeptical.

This is a reprint of an October 25, 2010, piece that I wrote about illumination with flames. I did tweak the title a bit for the sake of accuracy. -Th’ Gaussling

Until the invention of the electric lamp, the illumination of living and working space was very much the result of sunlight or of combustion.  Since the development of fire making skills in prehistoric times, the combustion of plant matter, fossil fuels, or animal fat was the only source of lighting available to those who wanted to illuminate the dark spaces in their lives.

From ancient times people had to rely on flames to throw heat and an agreeable yellowish light over reasonable distances. A good deal of technology evolved here and there to optimally capture the heat of combustion to do useful work (stoves, furnaces, and boilers) from readily available fuels. 

Lighting technology also evolved to maximize illumination from flame. High-energy-density fuels that offered a measure of convenience for lamp users evolved as well. Liquid fuels like vegetable oils, various nut oils, whale oil, and kerosene could flow to the site of combustion and were in some measure controllable for variable output. The simple wick is just such a "conveyance and metering device" for the control of a lamp flame: liquid fuel flows along the length of a wick by capillary action to a combustion zone whose size can be varied by simple manipulation of the exposed wick surface area.

The first reported claim of the destructive distillation of coal was in 1726 by Dr Stephen Hales in England. Hales records that a substantial quantity of "air" was obtained from the distillation of Newcastle coal. It is possible that condensable components were generated, but Hales did not make arrangements to collect them. Sixty years earlier, an account of a coal mine fire from flammable coal gases (firedamp) highlighted the dangerous association of coal with volatiles. So, flammable "air" was associated with coal for some time.

By 1826 a few chemists and engineers were examining the use of combustible gases for illumination. The historical record reveals two types of flammable gas derived from coal: coal gas and water-gas. Both came from the heating of coal, but under different conditions. Coal gas was the result of high-temperature treatment of coal under reducing conditions, a form of destructive distillation in which the available volatiles are released. Depending on the temperature, there was also the possibility of pyrolytic cracking of heavies to lights.

Water-gas was the result of contacting steam with red-hot coal or coke. The steam reacts with the carbon: C + H2O → CO + H2. Water-gas is thus a mixture of hydrogen and carbon monoxide, both of which are combustible. The formation of water-gas is reported to have been discovered by Felice Fontana in 1780.

One of the properties of burning coal gas or water-gas was the notably meager output of light from the flame. Workers like Michael Faraday noted that these new coal-derived gases provided feeble illumination, but that if other carbonaceous materials could be entrained, a brighter flame could result. It was during the course of investigations on illumination with carburized water-gas that Faraday discovered bicarburet of hydrogen, or benzene.

About this time, an engineer named Donovan also noted that if other carbonaceous materials were entrained into water-gas, the light output was enhanced. So, in 1830, Donovan installed a "carburetted" water-gas lighting system for a short run in Dublin.

Coal gas was first exploited for lighting by the Scottish engineer William Murdoch. Murdoch began his experiments in 1792 while working for Watt and Boulton in England. By the late 1790s, Murdoch was commercially producing coal gas lighting systems. His home was the first to be lit with coal gas.

The carburization of water gas eventually became an established industry in America in the second half of the 19th century. The treatment of gases, especially with the discovery of natural gas in Ohio, increased the commercial viability of lighting with gas. Carburization of water gas was aided by the discovery of hydrocarbon cracking to afford light components that could be used for this purpose.

Thorium is frequently found in the ores of the rare earth elements (REEs), and the connection of REEs to illumination begins in the laboratory of Berzelius in about 1825. Berzelius had observed that when thoria and zirconia were heated in non-luminous flames, the metal oxides glowed intensely. But this was not a new phenomenon. Substances like lime, magnesia, alumina, and zinc oxide were known to produce a similar effect. Goldsworthy Gurney had developed the mechanism of the limelight a few years before: a hydrogen-oxygen flame played on a piece of lime (calcium oxide) to produce a brilliant white glow. This effect was soon developed by Drummond into a working lamp for surveying.

The work of Berzelius was an important step in the development of enhanced flame illumination. He had extended the range of known incandescent oxides to include those that would eventually form the basis of the incandescent mantle industry. Thoria (mp 3300 °C) and zirconia (mp 2715 °C) are refractory metal oxides that retain mechanical integrity at very high temperature. This is a key attribute for commercial feasibility.

Numerous forms of incandescent illumination enhancement were tried in the middle 19th century. Platinum wire had the property of glowing intensely in non-luminous flames, but platinum was not robust enough for extended use and was quite rare and consequently very expensive. By 1885, a PhD chemist named Carl Auer von Welsbach patented an incandescent mantle that would take the gas light industry to a new level of performance. Welsbach had studied under Professor Robert Bunsen at the University of Heidelberg.

Welsbach fashioned the incandescent mantle into the form familiar to anyone today who has used a Coleman lantern. The original mantle consisted of a small cellulose nitrate bag impregnated with magnesium oxide, lanthanum oxide, and yttrium oxide in a 60:20:20 ratio. The mantle gave off a greenish light and was not very popular.

By 1890, Welsbach produced an improved incandescent mantle containing thoria and ceria in a ratio of 99:1. This mantle emitted a much whiter light and was very successful. Many combinations of zirconia, thoria, and REE metal oxides were tried owing to their refractory nature, but the combination of thoria-ceria at the ratio of 99:1 was enduring.

Welsbach made another contribution to the commercialization of REEs. Welsbach had experimented with mischmetal and was interested in its pyrophoric nature. He had determined that a mixture of mischmetal and iron, called ferrocerium, when struck or pulled across a rough surface, afforded sparks. In 1903 Welsbach patented what we now call the flint.  In 1907 he founded Treibacher Chemische Werke GesmbH. Today Treibacher is one of the leading REE suppliers in the world.

See the earlier post on REE’s.

REE’s in Greenland.

REE Bubble?

REE’s in Defense.

REE’s at Duke.

Here in Colorado, we were located north of the path of annularity, in the partial-eclipse region that swept across the US last week. I've seen annular eclipses previously, so it was a been-there-done-that event for me. Below is a great photograph from NASA showing the eclipse from DSCOVR (Deep Space Climate Observatory), a satellite jointly operated by the USAF, NASA, and NOAA. The satellite is in a non-repeating Lissajous orbit, also called a looping halo orbit, at the Lagrange point L1 about 1.5 million kilometers from Earth. From that vantage it has a perpetually fully illuminated view of the Earth rotating below it, except when the Moon crosses its line of sight.

The probe carries numerous sensors to allow measurements of the earth and space environments.

Source: NASA. The October 14, 2023 annular eclipse; the eclipse shadow is the dark spot over North America.

The path of annularity stretched across the southwestern states on October 14, 2023.

Source: NASA. Path of the annular eclipse.

Lagrange points arise from two large masses in gravitational proximity, in this case the Sun and the Earth. Relative to the two large masses, the five Lagrange points allow "parking orbits" for small objects like a satellite. Objects are placed in orbit around the Lagrange points so as to remain roughly stationary in relation to the Earth-Sun system.
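For the Sun-Earth pair, the distance from Earth to L1 can be estimated with the standard textbook approximation r ≈ R·(m_Earth/(3·m_Sun))^(1/3). The sketch below uses my own rounded constants and reproduces the roughly 1.5 million km figure quoted for DSCOVR.

```python
# Rough estimate of the Sun-Earth L1 distance using the standard
# approximation r ~ R * (m_earth / (3 * m_sun))**(1/3).

R_SUN_EARTH_KM = 1.496e8  # mean Sun-Earth distance, km
M_SUN_KG = 1.989e30       # mass of the Sun
M_EARTH_KG = 5.972e24     # mass of the Earth

r_l1_km = R_SUN_EARTH_KM * (M_EARTH_KG / (3 * M_SUN_KG)) ** (1 / 3)
print(round(r_l1_km / 1e6, 2))  # ~1.5 (million km from Earth, toward the Sun)
```

The cube-root scaling is why L1 sits only about 1% of the way to the Sun despite the Sun's enormous mass.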

Source: NASA. Lagrange Points.
Source: Jordi Carlos García García, Universitat Politècnica de Catalunya, 2009. A three-dimensional view of the simulated Lissajous-type orbit of the Gaia probe about L2.

According to Wikipedia, a Lissajous orbit differs from a halo orbit in that it is quasi-periodic and dynamically unstable, needing occasional station-keeping actions by the probe. A halo orbit about a Lagrange point is described as a periodic, 3-dimensional orbit.

The history of the probe is a bit odd. DSCOVR was initially called Triana, after Rodrigo de Triana, the lookout on Columbus's 1492 voyage traditionally credited with first sighting land in the Americas. The mission began as a proposal by Vice President Al Gore in 1998 for a whole-Earth observatory at the L1 point. The mission was put on hold by the Bush Administration in January 2001 and officially terminated by NASA in 2005. The probe was placed in nitrogen-blanketed storage until it was again funded, then removed and tested for viability in November 2008. The Obama Administration funded its refurbishment in 2009, and the mission was fully funded by 2012, when the Air Force also allocated funds for its launch and awarded SpaceX the contract. On 11 February 2015, the probe was finally launched by SpaceX on a Falcon 9 v1.1 from Cape Canaveral, FL. Management of DSCOVR is provided by NASA's Goddard Space Flight Center.

The NISTAR instrument on board the DSCOVR probe was provided by the National Institute of Standards and Technology, NIST. NISTAR is a 4-band cavity radiometer and is located as shown below in orange. It measures reflected and emitted light in the infrared, visible and ultraviolet parts of the spectrum. The instrument is able to separate reflected light from Earth’s radiant emissions.

Source: Wikipedia. The DSCOVR probe.
Source: NASA, Steve Lorentz, Allan Smith, Yinan Yu, L1 Standards and Technology, Inc. Graph showing the parts of the spectrum where reflected and emitted radiation from Earth is to be found.

The Faraday Cup (FC) is a sensor that collects and quantifies the flux of positively charged particles in the solar wind, i.e., protons and helium nuclei. Variations in the solar wind speed are observed. In the course of operation they discovered that the solar wind is “colder” than was previously thought in terms of what is referred to as “thermal speed.” The researchers presented thermal speed numbers on the order of 300 to 500 km/sec.

Source: NASA. The Faraday cup on board DSCOVR.
Source: NASA. The imaging camera- Earth Polychromatic Imaging Camera (EPIC). Sorry about the tiny print size.

Schematic of optical system of EPIC.

Source: Alexander Cede, Liang Kang Huang, Gavin McCauley, Jay Herman, Karin Blank, Matthew Kowalewski, and Alexander Marshak, Front. Remote Sens., 09 July 2021, Sec. Satellite Missions, Volume 2 – 2021 | https://doi.org/10.3389/frsen.2021.702275. Copyright © 2021 Cede, Kang Huang, McCauley, Herman, Blank, Kowalewski and Marshak. The optics of the EPIC camera are those of a Cassegrain-style telescope.

The probe has a 420 kg dry mass, and its solar panels provided an initial 600 watts at 28 volts. The probe's attitude and translational motion are managed with a set of 4 reaction wheels and 10 hydrazine thrusters. The hydrazine (N2H4) monopropellant is decomposed over a catalyst bed prior to ejection. This decomposition yields hot N2, H2, and NH3 gases.
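The decomposition chemistry above can be written as two commonly cited steps, and it is easy to verify that both are atom-balanced. This is an illustrative stoichiometry check, not flight chemistry data for DSCOVR's thrusters:

```python
# Atom-balance check for the two steps usually cited for catalytic
# hydrazine decomposition in monopropellant thrusters:
#   3 N2H4 -> 4 NH3 + N2      (exothermic decomposition on the catalyst bed)
#   4 NH3  -> 2 N2 + 6 H2     (partial, endothermic ammonia dissociation)
def atoms(terms):
    """Sum atoms over a list of (coefficient, {element: count}) terms."""
    total = {}
    for coeff, comp in terms:
        for el, n in comp.items():
            total[el] = total.get(el, 0) + coeff * n
    return total

N2H4 = {"N": 2, "H": 4}
NH3 = {"N": 1, "H": 3}
N2 = {"N": 2}
H2 = {"H": 2}

step1_ok = atoms([(3, N2H4)]) == atoms([(4, NH3), (1, N2)])
step2_ok = atoms([(4, NH3)]) == atoms([(2, N2), (6, H2)])
print(step1_ok, step2_ok)
```

The mix of N2, H2, and NH3 in the exhaust depends on how far the second, ammonia-cracking step proceeds in the catalyst bed.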

Like many satellites, DSCOVR uses reaction wheels for attitude control. Of the 4 reaction wheels, 3 control the 3 axes and the 4th is a spare. Each wheel is driven by an electric motor. When the angular velocity of a single reaction wheel changes, the spacecraft counter-rotates proportionally, changing its attitude about that one axis. Since the wheel velocity can be precisely controlled by the electric motor, fine adjustments in attitude can be attained.
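The "proportional counter-rotation" is just conservation of angular momentum about one axis: I_sc Δω_sc = −I_wheel Δω_wheel. The numbers below are illustrative placeholders, not DSCOVR's actual moments of inertia:

```python
# Single-axis reaction wheel sketch: spinning the wheel one way turns the
# spacecraft the other way, scaled by the ratio of moments of inertia.
import math

I_WHEEL = 0.02    # wheel moment of inertia, kg*m^2 (assumed)
I_SC = 500.0      # spacecraft moment of inertia about the same axis, kg*m^2 (assumed)

def slew_rate(delta_wheel_rad_s):
    """Spacecraft angular-rate change produced by a wheel speed change."""
    return -I_WHEEL * delta_wheel_rad_s / I_SC

# Spinning the wheel up by 100 rad/s turns the spacecraft slowly the other way:
rate = slew_rate(100.0)
print(f"{math.degrees(rate):.3f} deg/s")
```

Because the wheel-to-spacecraft inertia ratio is tiny, large wheel speed changes map to very small, finely controllable attitude rates.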

Note: Below is a quick safety brain-dump from a career in academic chemistry labs and chemical manufacturing facilities. It is not meant to be an unabridged guide to lab safety; look elsewhere for that. One thing that is easy to overlook is the Safety Data Sheets that come with chemical purchases.

At some time in their chemistry education the student should have had a good look at the chemical Safety Data Sheet or SDS for the chemicals and solvents they are using. While not necessarily very informative in terms of reaction chemistry, these documents are taken very seriously by many groups who can/will have an impact on your chemistry career and safety. Regardless of your walking-around-knowledge about a chemical substance, you should understand that the people who respond to emergency calls for a chemical incident will place a high reliance on what is disclosed on an SDS. A student who is connected with an incident won’t be the first point of contact when the fire department or ambulance arrives and wants information. In fact, it is highly unlikely that a student will ever have direct contact with a responder unless it is with an EMT.

Know where the SDS folder is. It may be in print or online.

When emergency responders arrive at the scene of your chemical incident, they will have protocols built into a strict chain of command. All information will pass through the responders' single point of contact. The firefighter holding the fire hose is not the person you should try to communicate with. Information regarding the incident must be communicated up the chain of command from your site incident commander; the person responsible for the lab should know who that is. The staff at the incident site (your college) will also have protocols built on a chain of command. Again, "ideally" the incident commander at the incident site will gather details on the event from others on the site, including the headcount (!), and communicate them to the incident commander of the responders. This is done to avoid confusing the responders with contradictory or useless information. Do not flood the responders with extraneous information. Don't speak in jargon. If there are important points like "it's a potassium fire," pass them along. If there are special hazards like compressed hydrogen cylinders present, they'd like to know that too. Answer their questions, then step back and let them do their job.

When responders arrive at the scene of a chemical incident, the first question they will ask is if everyone is accounted for. If everyone is accounted for, they will not risk their lives in the emergency response. However, if there are people unaccounted for or known to be trapped in a dangerous place or incapacitated, the responders will take greater chances with their own safety to rescue the victims. They will act to minimize property damage only if it can be done without risk to life and limb. Nobody wants to die saving property.

College chemistry departments that I have been involved with have had a flat policy of evacuating everyone from the building and congregating them at a defined location in response to an alarm. That way there is at least some reasonable chance that an accurate head count can be made. If technical advice is needed, faculty connected with the incident site should be consulted. The college will have an Environmental Health and Safety (EH&S) group or person who presumably will take charge on the institution's side. The leader of EH&S should be informed of any hazards unique to the substance of concern if there is no SDS. Let them communicate with the responders. Generally, we chemists help most when we keep out of the way.

College chemistry departments are famous for housing one-of-a-kind chemical substances in poorly labeled bottles in faculty labs. These substances almost never have any kind of safety information other than perhaps cautionary advice like “don’t get it in your eye.” Luckily, university research typically uses small quantities of most substances except perhaps for solvents. Solvents can easily be present at multiples of 20 liters. These large cans are properly kept in a flammables cabinet. While research quantities may not represent a large fire hazard initially, there could easily be enough to poison someone. When you get to the hospital, the ER folks will have to figure out what to do with your sorry ass lying there poisoned by your own one-of-a-kind hazardous material.

In principle, the professor in charge of a chemistry research lab should be responsible for keeping an inventory of all chemicals including research substances sitting on the shelf. Purchased chemicals always have an SDS shipped with them. These documents should be filed in a well-known location and available to EH&S and responders.

The chemistry stockroom is a special location. Chemicals are commonly present at what an academic might call “bulk” scale, namely 100 to 1000 grams for solids and numerous 20 L solvent cans. The number of kg of combustibles and flammables per square meter of floor space is higher here. The stockroom manager should have a collection of SDS documents on file available to responders.

Right or wrong, people positively correlate the degree of hazard to the nastiness of an odor. Emergency responders are no different. This is another reason why it is critical for them to have an SDS. People need to adjust their risk exposure to the hazard present as defined by an SDS. We all know that some substances that are bad actors actually have an odor that is not unpleasant for a short time, like phosgene. Regardless of this imperfect correlation, if you can smell it, you are getting it in you and this is to be avoided. Inhalation is an important route of exposure.

In grad school we had an incident where a grad student dropped a bottle in a stairwell (!) containing a few grams of a transition metal complex bearing a cyclooctadiene (COD) ligand. Enough COD was released into the stairwell to badly stink it up. Nobody knew whether it was an actual chemical hazard or not, so they pulled the fire alarm handle. The Hazardous Material wagon showed up right next to 50-60 chemistry professors, postdocs, and grad students. The responders were told what happened and with what, so they dutifully tried to find information on the hazards in their many manuals. They did not find anything.

They had 50-60 chemists within spitting distance but didn’t ask us any questions. This is because they are trained to respond as they did. This was a one-off research sample of a few grams but it had an obnoxious smell with unknown hazards. Finally they sent in some guys in SCBA gear and swept up the several grams of substance and set up a fan for ventilation. Don’t be surprised if the responders don’t have special tricks up their sleeves for your chemical event. They can’t anticipate every kind of chemical incident.

HazMat Team. Credit: https://en.wikipedia.org/wiki/Hazardous_materials_apparatus

Long story short, neither the responders nor the chemists had any special techniques tailor-made for this substance. There was no evident pyrophoricity or gas generation. It was a dry sample, so there were no flammable liquids to contend with. The responders used maximum PPE and practiced good chemical hygiene in the small cleanup. Case closed.

An SDS is required for shippers as well. It shows them how to placard their vehicles according to the hazards. Emergency responders need to see the SDS in order to safely respond to an overturned 18-wheeler in the road or to a spill on a loading dock. It could also be that the captain of a container ship wants to know precisely what kind of hazardous materials are visiting his/her ship.

Finally, an SDS should be written by a professional trained to do it properly. By properly I mean by someone who understands enough about regulatory toxicology, emergency response, relevant physicochemical properties, hazard and precautionary statements and shipping regulations to provide responders with enough information to respond to an incident. Here, incident means an unexpected release with possible exposure to people, a release into the environment or a fire or possible explosion.

In my world, the word "accident" isn't used so much anymore. With the advent of process hazard analysis (PHA), required by OSHA under Process Safety Management prior to the startup of a process, potential hazards and dangers are anticipated by a group of experienced experts and adjusted for. So, it is getting harder to have an unexpected event. "Accident" is being replaced with the word "incident."

Toxicology is a specialty concerned with poisons. Regulatory toxicology refers to the field where measurements and models are used to define where a substance belongs in the many layers of applicable regulations. Toxicity is manifested in many ways with many consequences, and each way is categorized into levels of severity. There is acute toxicity and there is chronic toxicity; know the difference. Note also that dose and exposure are two different things. Exposure relates to the presence of external toxicants, i.e., ppm in water or micrograms per cubic meter of air. Dose relates to the amount of toxicant entering the body, based on the exposure time in the presence of a toxicant and the route of entry.
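The exposure-to-dose distinction can be made concrete with a back-of-the-envelope inhaled-dose estimate: dose scales with air concentration, breathing rate, time, and absorbed fraction, normalized to body weight. All input values here are illustrative assumptions, not regulatory defaults:

```python
# Back-of-the-envelope inhaled dose from an airborne exposure.
# dose (mg/kg) ~ concentration * breathing rate * time * absorbed fraction / body weight
def inhaled_dose_mg_per_kg(conc_mg_m3, vent_m3_h, hours, absorbed_frac, body_kg):
    return conc_mg_m3 * vent_m3_h * hours * absorbed_frac / body_kg

# 0.5 mg/m3 over an 8-hour shift, a light-work breathing rate of 1.2 m3/h,
# assuming complete absorption, for a 70 kg worker:
d = inhaled_dose_mg_per_kg(0.5, 1.2, 8.0, 1.0, 70.0)
print(f"{d:.3f} mg/kg")
```

The same exposure concentration thus yields different doses for different exposure durations, breathing rates, and body weights, which is exactly why regulators treat the two quantities separately.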

An SDS uses signal words like "Caution", "Warning", or "Danger". A particular standard test is needed to narrow down the type and magnitude of the toxicity. The figure below from the GHS shows the thresholds for categorization of Acute Toxicity.

Credit: Globally Harmonized System of Classification and Labeling of Chemicals.
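The oral-route cut-offs in that table can be sketched as a simple lookup. The LD50 boundaries below are the GHS oral values; dermal and inhalation routes use different numbers, so treat this as an illustration of how the banding works rather than a classification tool:

```python
# Oral LD50 category cut-offs (mg/kg body weight) per the GHS acute-toxicity
# table. Lower LD50 means more acutely toxic, hence the lower category number.
def ghs_oral_category(ld50_mg_per_kg):
    if ld50_mg_per_kg <= 5:
        return "Category 1"
    if ld50_mg_per_kg <= 50:
        return "Category 2"
    if ld50_mg_per_kg <= 300:
        return "Category 3"
    if ld50_mg_per_kg <= 2000:
        return "Category 4"
    if ld50_mg_per_kg <= 5000:
        return "Category 5"
    return "Not classified"

print(ghs_oral_category(30))    # a fairly toxic substance
print(ghs_oral_category(2500))  # low acute oral toxicity
```

Categories 1 and 2 carry the "Danger" signal word for acute oral toxicity; the higher-numbered categories carry "Warning".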

Hazard and precautionary statements are important for an SDS. Rather than having everybody dreaming up their own hazard descriptions and precautions, this has been standardized into agreed upon language. Among other sources, Sigma-Aldrich has a handy list of Hazard Statements and Precautionary Statements available online.

Regulatory toxicology is very much a quantitative science enmeshed in a web of regulations. The EPA, for instance, does modeling of human health and environmental risks based on quantitative exposure or release inputs. Without toxicological and industrial hygiene testing data, they may fall back on model substances and default, worst-case inputs to their models. In reality, certain hazard warnings you see on an SDS may or may not be based on actual measurement. The EPA can require that certain hazard statements be put on a given SDS based on their assessment of risk using models or actual data.

To be clear, hazard information reported on an SDS is considered gospel by emergency responders. Chemists of all stripes should be conversant with Safety Data Sheets and have a look at them the next time a chemical arrives. Your lab or facility should have a central location for SDS documents, paper or electronic.

In the handling and storage of chemicals, some thought should be given as to how a non-chemist would deal with a chemical spill. Is the container labeled with a CAS number or a proper name rather than just a structure? A proper name or CAS # could lead someone to an SDS. Is there an HMIS or other hazard warning label? There are many tens of thousands of substances that are either a clear, colorless, or amber liquid or a colorless solid. Label clearly, if not for the sake of emergency responders, then for the poor sods in EH&S who will likely have to dispose of the stuff when you are long gone. Storing chemicals, liquids especially, with some kind of secondary containment is always a plus. Keep the number of kilograms of combustibles and flammables in the lab to a minimum. A localized fire is better than a fire that quickly spreads to the clutter on the benchtop or the floor.

My, my, my. Robert F. Kennedy Jr. really screwed the pooch with his comments on ethnically targeted COVID-19. Reportedly, he said "there is an argument that (COVID-19) is ethnically targeted", adding "Covid-19 is targeted to attack Caucasians and Black people. The people who are most immune are Ashkenazi Jews and Chinese …. we don't know whether it's deliberately targeted or not." If this quote is correct, he did not actually say that COVID-19 was ethnically targeted, but rather that "there is an argument …". It is much like saying "is Bob still beating his wife? I just don't know …" Whether he endorses the targeting theory or not isn't clear, but he was willing to trot out this provocative statement to make his point. There was much blowback. Given the racial undertones, it was a large blunder.

RFK Jr. is well known as an advocate for conspiracy theories, some of which are whoppers. The online news magazine Slate has an article that compiles them. I find his portfolio of mania exhausting. The thought of pushing back against it seems like a fool's errand. It reminds me of a line in the movie True Grit: "What have you done when you have bested a fool?" What is the point in debating him?

RFK Jr. is a Harvard grad and went to the University of Virginia School of Law for his JD degree. He had a few slip-ups early in his career but recovered. He spent most of his career as an environmental lawyer and has fought many laudable battles for environmental justice. Somewhere along the line he went off the rails and boarded the crackpot ferry to conspiracy land. RFK Jr. is a penetrating anti-vaccine voice who can draw large crowds, if for no other reason than to see him.

The substance of concern behind much of the anti-vaccine Sturm und Drang is Thimerosal. It is a synthetic organomercurial compound that is effective against bacteria and fungi. Its biocidal properties have been known since around 1930. Mercurials have been used since the time of the Swiss alchemist Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim) in the 1500s. Paracelsus is known for the pronouncement that "only the dose makes the poison." This remains a fundamental principle of toxicology.

The early mercurial medicaments used by Paracelsus were simple inorganic salts of mercury(II) like mercuric chloride, HgCl2, or mercury(I) like mercurous chloride, Hg2Cl2, also known as the mineral calomel. Mercuric chloride is prepared by treating liquid mercury with sulfuric acid followed by addition of sodium chloride for anion exchange. Mercurous chloride is prepared by heating mercuric chloride with mercury to do the reduction of Hg++ to Hg+.

Thimerosal is sometimes wrongly compared to methylmercury, a known and tragically toxic compound with the general formula CH3HgX. The X anion can be chloride, hydroxide, or a thiolate, depending on the source. It is an easy comparison to make because of the similarity of the methyl (CH3) group to the ethyl (CH3CH2) group in Thimerosal, but research has shown it to be a poor comparison. Methylmercury compounds can be produced by aquatic microorganisms in water bodies in the presence of inorganic mercury. The methylation of natural biomolecules is a well-known process.

Like many metals, mercury has an affinity for sulfur, occurring naturally as mercury(II) sulfide, HgS, in deposits of cinnabar or as a minor constituent with other minerals. It also has an affinity for sulfur-containing amino acids such as methionine, cysteine, and homocysteine found in proteins. In the bloodstream, mercury binds with proteins like albumin to the extent of 95-99%. In the body, in the presence of water, Thimerosal decomposes to thiosalicylate and ethylmercury. The ethylmercury cation (CH3CH2Hg+) disperses widely and can cross the blood-brain and placental barriers.

Cinnabar crystal, HgS. Source: Mindat.org

According to Doria, Farina, and Rocha (2015) in Applied Toxicology, a comparison of effects between methylmercury and ethylmercury gave essentially the same outcomes in vitro for cardiovascular, neural, and immune cells. Under in vivo conditions, however, there was evidence of different toxicokinetic profiles: ethylmercury showed a shorter half-life and different compartmental distribution and elimination compared to methylmercury. The two compounds therefore present different exposure and toxicity risks.

For many years, Thimerosal was sold as an antiseptic under the name Merthiolate as a tincture (an ethanol solution) by Eli Lilly and Co. Like most households in the 1960’s, we had it in the medicine cabinet or its cousin Mercurochrome. They were used for topical application to burns, cuts and scratches. Thimerosal has been used as a preservative in many health-related preparations such as vaccines, eye drops and contact lens disinfecting solutions. While the CDC has cleared it of doing harm, anti-vaccine mania hit the fan well before COVID-19 and RFK Jr. put his credibility and name recognition behind it.

Thimerosal was first prepared by chemist Morris Kharasch at the University of Maryland in 1927. An interesting technical summary of the substance can be found on Drugbank Online.

Morris Selig Kharasch. Photo credit: National Academy of Sciences, 1960.

Kharasch is known for his pioneering work in free radical chemistry in the 1930's at the University of Chicago, but before that he began his work with organomercury chemistry during the 1920's while at the University of Maryland. His development of Thimerosal was a result of his organomercury work. He is also credited with opening the door to organic free radical chemistry, leading to improvements in rubber polymer chemistry and manufacture. His work led to the use of peroxides to reliably induce the so-called anti-Markovnikov addition of a protic acid (HX) to olefins. The presence of trace peroxides was behind the unexpected "reverse" Markovnikov addition seen in work with the addition of hydrogen bromide to bromopropene.

Kharasch’s early work in organomercury chemistry led to the invention (and patenting) of what became known as Merthiolate (thimerosal). Kharasch later worked as a successful consultant for Eli Lilly, the Du Pont Company, US Rubber, the US Army and others. In many cases these companies were the assignees of the patents.

Little mention is made of Morris Kharasch as a prolific and wide-ranging inventor with, by my count, 117 US patents to his name. So, why did Kharasch bother to patent Thimerosal? Did he anticipate its biocidal and preservative properties?

Kharasch references make mention of a 1931 patent regarding Thimerosal. That patent is STABILIZED BACTERICIDE AND PROCESS OF STABILIZING IT, US 1862896, appln. filed August 22, 1931, assignee: no party disclosed. The patent claims both a process and water-soluble solution compositions. Claimed additives include antioxidants, alkyl amines, ethanolamine, and borax. Claim 19 is telling: it claims the composition of sodium ethyl mercurithiosalicylate (Thimerosal), monoethanolamine, borax as a buffer, and enough sodium chloride to make the composition sufficiently isotonic with body fluids. In this patent the Thimerosal composition itself is not claimed, but rather its use as a component of a stabilized water solution. Claim 14 claims a water solution composition of sodium ethyl mercurithiosalicylate and an antioxidant which tends to "inhibit the acquisition" (odd choice of words) of burning properties by the solution. This, plus the claim of an isotonic composition, strongly suggests anticipated medicinal applications.

STABILIZED ORGANO-MERCUR-SULFUR COMPOUNDS, US 2012820, appln. filed Feb 17, 1934, assignee: Eli Lilly and Company. Claims a stabilized solution of alkyl mercuric sulfur compounds in water with aliphatic 1,2-diamines. Also claims an ethylenediamine ethylmercurithiosalicylate composition. This is similar to the '896 patent but specifies ethylenediamines.

As mentioned above, the biocidal nature of inorganic mercurials had been known for a long time. There was actually limited success in the treatment of syphilis, but mercurials were long known for being very harsh on the patient and fell out of favor when better treatments came along.

The antiseptic properties of Mercurochrome were discovered in 1918 at Johns Hopkins Hospital by urologist Hugh H. Young. Mercurochrome is essentially a dye molecule with an attached mercury warhead. There are three groups on the organic structure that aid in its solubility in water: NaO, CO2Na, and HgOH. Water solubility is often an important attribute in medicinal substances.

Source: Wikipedia.

Given that antiseptic properties of organomercurials were known, it is perhaps not surprising that an enterprising Ukrainian immigrant with an interest in organomercurials like Morris Kharasch might want to patent his invention.

Jupiter is quite old like the rest of the solar system. But even this far down the timeline, it is still a banded, multicolored gas giant. The same goes for Saturn. How is it that these planets are not some shade of brown or grey? The planet has an active atmosphere with complex circulation patterns. After a few billion years of atmospheric mixing, how is it that Jupiter still has a banded and bespotted atmosphere?

Ever wonder what substances are responsible for the colored features on Jupiter? Molecular hydrogen and helium make up the vast majority of atmospheric components, but these gases are not colored in the visible spectrum. Other gases found in the atmosphere include the noble gases argon, krypton, and xenon, along with ammonia (NH3), methane (CH4), hydrogen sulfide (H2S), water (H2O), and phosphine (PH3); all are colorless as well. Ammonium sulfide ((NH4)2S, CAS# 12135-76-1) and ammonium hydrosulfide (NH4SH, CAS# 12124-99-1) are thought to exist there. These last two could arise from a simple acid/base reaction between hydrogen sulfide and ammonia. A more comprehensive view can be had here. From the looks of it, Jupiter is a very stinky place.

The gaseous substances above are certainly colorless when free of suspended particles, and their pure condensates would be expected to form whitish clouds of liquid droplets or solid particles. According to one source, ammonium hydrosulfide is a yellow fuming liquid with a boiling point of 51.6 °C at one atmosphere and forms white rhombic crystals under anhydrous conditions. Ammonium hydrosulfide is in equilibrium with its components ammonia and hydrogen sulfide.

Ammonium sulfide is a yellow crystalline solid that decomposes at ambient temperature (and presumably at 1 atmosphere on earth).

Organic compounds like methane, ethane, acetylene, and diacetylene, found in trace amounts in the Jovian atmosphere, could be activated by UV sunlight in the upper atmosphere into higher molecular weight unsaturated substances that could have visible chromophores. This would be an ongoing process as circulation moves the substances around, so colored products should accumulate.

Credit: Webb Space Telescope; https://webbtelescope.org/contents/media/images/4182-Image

Given the optical opacity of the visible clouds on Jupiter, whatever colors are there must be due to suspended liquid aerosols and solid particulates. The colorful photo below, glorious though it may be, is an enhanced image in the optical wavelengths and possibly suggests there may be a higher concentration of colored substances than really exists.

In fairness, with all imagery, be it chemical photography or digital photography, decisions have to be made about color balance, saturation and contrast. In both cases, be it dyes or silver halide or semiconductor chips, these photosensitive materials won’t be sensitive across the color spectrum in the same way that our eyes are. It is hard to say by just looking at the photos how much image enhancement has been done to them. In particular, how is the color balance established? Well, NASA has made the Juno raw images available to the public so a lot of image enhancement by various people has been done based on aesthetics without regard to visual accuracy.

NASA has a piece of software used for color correction at the link here.

Even more fundamental than the limitations of the sensor chip on board Juno is the matter of "what is color anyway?" In this universe, color as humans perceive it exists only in the convoluted neural pathways of our brains. In reality, the visible color spectrum consists of a band of wavelengths of electromagnetic radiation (EMR) ranging from 380 to 700 nanometers. Every other range of EMR, like gamma rays, x-rays, ultraviolet, infrared, microwave, and longwave "radio" light, could be thought of as having its own "color" spectrum, albeit invisible to our eyes.
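That 380 to 700 nm band can be put in energy terms with the photon relation E = hc/λ, and it also lets us check the "a bit under one octave" claim made later (an octave being a doubling of wavelength or frequency). Constants are rounded:

```python
# Photon energy across the visible band, E = h*c/lambda, expressed in eV,
# plus a check that 380-700 nm spans less than one octave (ratio < 2).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

violet = photon_ev(380)   # ~3.3 eV, enough to drive some photochemistry
red = photon_ev(700)      # ~1.8 eV
print(f"visible band: {red:.2f} to {violet:.2f} eV; 700/380 = {700/380:.2f}")
```

Since 700/380 is about 1.84, the visible band indeed falls just short of a full octave.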

A Bit O’Chemistry

Color is a sensation that comes to our consciousness as a result of (bio)chemical mechanisms. Chemistry is generally about what can happen with the outer valence level electrons that buzz around atoms and molecules. We Earthlings are composed of chemicals, and EMR (photons) can interact with those chemicals in ways that depend on the wavelength of the EMR. Our light perception begins with the ability of our chemical building blocks to absorb a certain band of wavelengths. Light can do two things in an encounter with matter: it can undergo absorption/emission, or it can scatter.

Graphics courtesy of me.

Absorption of a photon of visible or ultraviolet light by an organic molecule happens because there is something that can be acted upon to absorb the energy. Absorption of a photon of visible light by a molecule is limited to its valence electrons. In particular, a valence electron can be stimulated to jump to a higher energy level orbital around the organic molecule. This can result in a chemical change in the receiving molecule.

Absorption of infrared light causes vibration in the structure of the molecule. X-rays can cause ejection of inner electrons. Gamma rays can be absorbed or scatter off the nucleus. Microwave photons induce rotational motion or torsion in a polar molecule. Cosmic radiation is often so energetic that molecules are indiscriminately broken at the chemical bond level into neutral or charged pieces, leaving an ion channel along the path of the particle. However, new molecules may form when the reactive fragments recombine. Cosmic ray collisions with atomic nuclei form narrow sprays or showers of nuclear particles, as happens in Earth's atmosphere. This is called secondary cosmic radiation and comprises x-rays, protons, alpha particles, pions, muons, neutrons, neutrinos and electrons.

Note the carbon bonds above with two lines between carbon atoms. They are called “double bonds” and they can absorb visible and ultraviolet EMR. When several of them are alternating as in Retinal, they are capable of visible light absorption. Roughly speaking, the longer the chain the longer the wavelength that can be absorbed, not unlike an antenna. Absorption of a photon can cause one of the two bonds to break and allow the remaining carbon chain to rotate about the remaining single bond. In this case the cis form rotates into the trans form which is a bit more stable due to reduced strain energy. The double bond can reestablish in the trans form and lock into place.

In changing from cis to trans, the elemental composition has not changed but the shape and certain chemical and physical properties have. When the shape of a molecule is changed, the manner in which the molecule interacts by contact with other molecules changes, particularly with proteins. This triggers the chain of biochemical events that follow, leading to light perception in our consciousness.

In living systems, some biomolecules have features that lend them the ability to absorb photons, sometimes to a useful end and sometimes to a destructive end (i.e., as with UV light and x-rays). Here, a chemical change would be the rearrangement of an electron around the molecule or a change in molecular shape or both. Receptor molecules in the retina are a particularly good example of a useful result of light absorption.

The result of this change from cis to trans is ultimately communicated from the retina to the brain via depolarization waves moving along nerve fibers and releasing neurotransmitters across synaptic gaps. Importantly, the change that caused the depolarization wave is not permanent.

The visible spectrum of light waves, a bit under 1 octave wide, just so happens to be the band of light that can interact with valence electrons absent the destructive excitation that UV and x-rays cause. Infrared light causes vibration of chemical bonds and microwaves cause rotation of polar molecules. Longer radio waves pass right through us.
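The "a bit under 1 octave" figure is easy to check: an octave is a doubling of frequency, or equivalently a halving of wavelength. Taking the visible band as roughly 380 to 700 nm (a common convention, not a hard boundary):

```python
import math

# Approximate edges of the visible band, in nanometers.
VIOLET_NM = 380
RED_NM = 700

# An octave is a factor of 2; count the octaves spanned by the band.
octaves = math.log2(RED_NM / VIOLET_NM)
print(f"visible band spans {octaves:.2f} octaves")  # just under one
```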

Rather than go into the biochemistry of this I will invite the reader to surf the interwebs for more. When you examine the chemical mechanism of light perception, think about what it took to figure this out.

Back to Jupiter.

Well, something opaque and colored is swirling around Jupiter persistently; just what the heck is it? The above example of retinal was of a carbon-based, organic substance. The way carbon-based molecules interact with light is somewhat different from the way inorganic complexes do. Whereas organic molecules can have double bonds and lone electron pairs that interact with EMR, inorganic substances largely lack this bonding feature. Instead, absorption involves excitation of valence electrons and depends on the net charge of the metal ion. Inorganic substances as a group span a very broad range of colors.

What is of interest here is why the atmosphere hasn't mixed into a single color over cosmic time. By visual inspection of the Juno images, Jupiter's atmosphere is covered with abundant turbulent flows.

The answer must relate to the unseen vertical flows. A colorless gas that condenses into clouds transitions from colorless to opaque as it rises, cools and condenses, just like on Earth. Jupiter is famous for its colored stripes and the persistent Great Red Spot. These stripes make visible certain flows around the planetary axis. Due to the spherical shape of the rotating planet and heating from the sun, there will be a temperature gradient with altitude, a pole-to-equator gradient, and the Coriolis effect, all with varying amounts of vertical mixing as well.

There must be the possibility of non-gaseous material being lofted into the atmosphere from some liquid or solid surface below into a stable but complex system of circulation patterns. The process would self-select the finer particulates that are small enough to remain suspended in the atmosphere. But this in itself does not explain the presence of the colored bands or swirls.

Perhaps the colored bands and swirls imply a solid or liquid surface below that is inhomogeneous; that is, there are localized, enriched "deposits" of particular substances. These surface deposits may or may not be "locked" into the latitude by the prevailing winds according to the physical properties of the material.

The apparent longevity of the multicolored atmosphere could be because the striped, large-scale circulation features are of sufficient strength that their inertia carries them around the planetary axis and directs them away from latitudinal flow. This would not prevent vortex formation at the interface or even within the band.

Enough. This is where I get off the hamster wheel of wild scientific speculation.

A few details on the JunoCam can be found here.

The above image is spectacular but is not what the human eye would perceive. Below is a comparison of a simulated human eye view vs a processed image with increased color saturation and contrast.

Human eye view of Jupiter vs image enhanced view. Image processing enhances color saturation and contrast. Photo credit: https://www.nasa.gov/image-feature/jpl/nasa-s-juno-mission-reveals-jupiter-s-complex-colors/
Credit: NASA JPL, https://photojournal.jpl.nasa.gov/jpeg/PIA25017.jpg

Included just because it is pretty. Credit: NASA

Atomic hydrogen (the major isotope protium) is the simplest, lightest and most abundant neutral atom in the universe. Molecular hydrogen, H2, is the simplest neutral molecule in the universe. Seems very simple. Well, hold on. Turns out that molecular hydrogen has two distinct forms and it relates to the business of nuclear spin.

Quantum mechanics (QM) is a basket of wavy weirdness. It is a model of the universe at the atomic and nuclear levels that is wildly different from the larger scale Newtonian universe of colliding billiard balls we humans casually observe. The QM model of the microscopic universe dates back to the early 1900’s and has been endlessly supported by experimental data, and it continues to surprise to this day. One of the fundamental QM quantities is ‘spin.’

Fundamental particles like electrons and protons have something referred to as spin angular momentum. In the larger scale Newtonian universe, spinning is something we equate with an object rotating about an axis. A proton has a measurable diameter; it is a finite-sized object with mass, charge and spin. Electrons have mass, charge and spin as well. However, electrons do not have a measurable size. They appear to be a point charge. So, how does an electron with no measurable size actually spin? What is it that spins? A point of clarification: quantum spin has nothing to do with a rotating internal mass. It is a quantized wave property expressed in the same units as classical angular momentum (N·m·s = J·s, or kg·m²·s⁻¹). So, what the hell is quantum spin?

Spin angular momentum was inferred experimentally from the Stern-Gerlach experiment, first conducted in 1922. In this experiment, silver atoms were passed through a magnetic field gradient toward a photographic plate. Particles with no magnetic moment** would pass straight through unaffected, while particles with a non-zero magnetic moment would be deflected by the field. The photographic plate revealed two distinct beams rather than a continuous distribution, indicating that the magnetic moment was quantized into two states. The magnetic moment at the time was thought to be due to the literal spinning of an electrically charged particle. Stern and Gerlach deduced that there were two spin configurations, i.e., that spin was quantized.
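A toy simulation makes the two-beams result concrete. A classically spinning charge could have any moment orientation and would smear into a continuous band; quantized spin allows exactly two deflections. The scale factor and the ±1/2 outcomes below are illustrative assumptions, not a model of the 1922 apparatus:

```python
import random

DEFLECTION_SCALE = 1.0  # arbitrary units; real deflection depends on the field gradient

def deflection(spin_z):
    """Deflection of an atom whose magnetic moment projection is
    proportional to its spin z-component, quantized to +/- 1/2."""
    return DEFLECTION_SCALE * 2 * spin_z  # -> +1 or -1 in these units

# Send 10,000 silver atoms through; each is measured spin-up or spin-down.
hits = [deflection(random.choice((+0.5, -0.5))) for _ in range(10_000)]

# The photographic plate shows exactly two bands, not a continuum.
print(sorted(set(hits)))  # [-1.0, 1.0]
```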

Schematic of the Stern-Gerlach experiment. Credit: https://www.youklab.org/teaching/mites_2010/mites2010_quantumSlides.pdf

If you want to go deeper down the QM rabbit hole, be my guest. We’ll go forward with the notion of spin up and spin down. You’ll see how it works.

Atomic Hydrogen- Things Get Sciency

First, let's look at a neutral hydrogen atom made of a proton and an orbiting electron. Both particles have spin, and each can be in one of two states relative to the other: parallel or antiparallel, or simply spin up and spin down for the sake of illustration. The spin combinations are up-up and down-up as shown in the figure below. Think of the arrows as bar magnets: up-up would be two magnets with their north poles in parallel, and down-up would be bar magnets with magnetic poles facing opposite directions, or antiparallel. The arrangement where the magnets are aligned with identical poles in the same direction is less energetically favorable than when they are antiparallel. Since it is energetically downhill, the up-up configuration will tend to flip to the down-up, or antiparallel, lower-energy state. The energy difference is lost as radio frequency radiation in the microwave band.

A spin flip to the lower energy level results in the emission of 1420 MHz (21 cm wavelength) radio frequency radiation. This can be detected by a radio telescope, though with some difficulty due to a poor signal-to-noise ratio. Credit: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/h21.html

The spin transition energy is 9.411708152678(13)×10⁻²⁵ joules. Regions of space with more intense 21 cm radiation are thought to be regions of greater hydrogen atom abundance. These regions can be examined for redshifting to give clues about relative motion in space. The spiral structure of the Milky Way galaxy was discovered with 21 cm radio observations.
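That transition energy converts directly to the 1420 MHz and 21 cm figures via f = E/h and λ = c/f, which is easy to verify:

```python
H = 6.62607015e-34         # Planck constant, J*s (exact SI value)
C = 2.99792458e8           # speed of light, m/s (exact SI value)
E_HF = 9.411708152678e-25  # hyperfine transition energy, J (from the text)

freq = E_HF / H        # frequency of the emitted photon, Hz
wavelength = C / freq  # wavelength, m

print(f"{freq / 1e6:.1f} MHz, {wavelength * 100:.1f} cm")  # 1420.4 MHz, 21.1 cm
```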

Molecular Hydrogen, H2

Molecular hydrogen consists of two hydrogen atoms that share a pair of electrons, which provide the bonding force. The two electrons spend a finite amount of time between the protons, offsetting the repulsive force between them. It's called a sigma bond. So far, so good. The bond is springy, so the molecule can and does vibrate.

An unfortunate reality of chemistry: like most topics, the more background you have on a chemistry principle, the more unifying and elegant it becomes. This means that sharing the beauty of the molecular world is a little more difficult than many would like. I regret this most sincerely. Most freshman chemistry involves balancing equations and PV=nRT math. Necessary but not always captivating. Freshman chemistry is much like The Hobbit before The Lord of the Rings trilogy. It's a necessary prelude.

First, a Dive Down the QM Rabbit Hole

Ok, I couldn't ignore the QM rabbit hole. The two electrons of an H-H bond must have opposite spins in order to form a covalent bond. An orbital represents a specific occupancy space for one or two electrons around an atom or molecule. Orbitals are places, not physical objects. The atomic orbital model is a mathematical construct based on spherical harmonics that defines the regions of space electrons will occupy around the nucleus, depending on their energy and quantum numbers. Within such a region, the likelihood of finding an electron varies in a wavelike manner.

Two electrons can occupy one orbital if they have opposite spins; it's referred to as spin pairing. (Note: I posted on the orbital stuff a few posts back.) This hard and fast rule of antiparallel spins occupying the same orbital is formalized by the Pauli Exclusion Principle, which says specifically that no two identical fermions can occupy the same quantum state within the same quantum system. Electrons are fermions, and the upshot is that only two electrons, of antiparallel spin, can occupy a single orbital. If two or more orbitals of equal energy are available, electrons will occupy separate orbitals with the same spin. The manner of the filling of orbitals with electrons is covered by Hund's Rule.

Finally, QM gives a number to an electron's spin: the spin quantum number. According to the Pauli Exclusion Principle, two electrons must have different half-integral spin quantum numbers, +1/2 and −1/2, i.e., be antiparallel, to occupy the same orbital space.

Credit: Wikipedia.

Because the two H-H electrons are spin paired, there is no net spin from them. However, the protons are a different matter. Their spins can be parallel (up-up or down-down) or anti-parallel (up-down). The anti-parallel spins cancel to give no net proton spin to the H-H. But, in the case of spin parallel, the H-H molecule definitely has net spin.

Spin Isomers of H-H. Credit: Wikipedia, https://en.wikipedia.org/wiki/Spin_isomers_of_hydrogen

The spin-parallel H-H molecules are called orthohydrogen and the spin-antiparallel H-H molecules are called parahydrogen. They are referred to as spin isomers and are each distinct substances. There can be interconversion between orthohydrogen and parahydrogen. The transition does not emit radiation, but it is exothermic: parahydrogen is more stable by 1.455 kilojoules per mole (kJ/mol). Heating hydrogen brings the composition to a maximum of 75% ortho to 25% para, the 3:1 mixture known as "normal" hydrogen. When hydrogen is liquefied, there is a slow conversion of ortho to para. It is worth noting that the enthalpy of vaporization of hydrogen, 0.904 kJ/mol, is smaller than the roughly 1.09 kJ/mol released when normal hydrogen (3:1 ortho to para) fully converts to parahydrogen. The conversion of orthohydrogen to parahydrogen in liquid form is therefore exothermic enough to boil off hydrogen, leading to hydrogen loss and possibly causing a hazardous pressure rise. Those who regularly handle liquid hydrogen must be aware of this phenomenon. Orthohydrogen can also be catalytically converted to parahydrogen by contact with certain substances such as ferric oxide and chromic oxide, as well as several other materials.
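Both the 3:1 high-temperature ratio and the boil-off hazard can be sketched numerically. This is a minimal sketch, assuming the textbook rotational temperature of H2 (θ_rot ≈ 87.6 K) and the nuclear-spin-weighted rotational partition functions (ortho: odd J, weight 3; para: even J, weight 1), not a precise cryogenic model:

```python
import math

THETA_ROT = 87.6  # rotational temperature of H2, K (commonly cited value)

def ortho_fraction(T, j_max=30):
    """Equilibrium fraction of orthohydrogen at temperature T (K).
    Ortho states have odd J and nuclear-spin weight 3; para states
    have even J and weight 1."""
    z_para = sum((2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / T)
                 for j in range(0, j_max, 2))
    z_ortho = 3 * sum((2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / T)
                      for j in range(1, j_max, 2))
    return z_ortho / (z_ortho + z_para)

print(f"300 K: {ortho_fraction(300):.3f} ortho")  # ~0.75, the 3:1 limit
print(f" 20 K: {ortho_fraction(20):.4f} ortho")   # nearly pure para

# Why liquid hydrogen boils off: converting the 75% ortho in freshly
# liquefied "normal" hydrogen releases more heat per mole than it
# takes to vaporize the liquid.
conversion_heat = 0.75 * 1.455  # kJ/mol released by full ortho -> para conversion
h_vap = 0.904                   # kJ/mol, enthalpy of vaporization of H2
print(conversion_heat > h_vap)  # True
```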

** Magnetic moment (from Wikipedia): magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field.

A paper is out comparing the resources needed to send women vs men on a trip to Mars. The paper, appearing in the Nature publication Scientific Reports, is: Scott, J.P.R., Green, D.A., Weerts, G. et al. Effects of body size and countermeasure exercise on estimates of life support resources during all-female crewed exploration missions. Sci Rep 13, 5950 (2023). https://doi.org/10.1038/s41598-023-31713-6.

The paper is worth a look, but I’ve cut and pasted the conclusions below-

When compared at the 50th percentile for stature for US females and males, these differences increased to − 11% to − 41% and translated to larger reductions in TEE, O2 and water requirements, and less CO2 and Hprod during 1080-day missions using CM exercise. Differences between female and male theoretical astronauts result from lower resting and exercising O2 requirements (based on available astronaut data) of female astronauts, who are lighter than male astronauts at equivalent statures and have lower relative VO2max values. These data, combined with the current move towards smaller diameter space habitat modules, point to a number of potential advantages of all-female crews during future human space exploration missions.

A female crew would require less energy and less weight in provisions than a male crew, from the benefits of smaller-scale metabolism alone. Looks like hurtling women to Mars is an all-around winning idea.

A few years ago I found myself wandering through the Denver Museum of Nature and Science, where I happened upon a robotics exhibition. As museum exhibits go, it was well conceived and executed, complete with a topical gift shop at the exit. All of the displays were accessible to the public in terms of language and hands-on widgetry. At each hands-on exhibit there stood a determined 5-to-8-year-old yanking the controls around in a frantic effort to steer the robotic device away from the wall of the test area while onlookers yawned, waiting their turn. A visitor might have concluded that the purpose of the robot was to become stuck against an obstacle, a task it performed well.

These kinds of future-technology exhibits are always popular at the museum. The lead-up to this one was given all of the ballyhoo the museum could afford. The theme was supercharged with the promise of a brighter tomorrow through snazzy technology. If automobiles can be tied in, so much the better. It was a celebration of the triumph of technology for the everyman. The subtext was that only by the clever application of technology will we continue to improve our lives. These wonderful robots with their mechanical limbs and primate form would free humans from the dangers and tedium of the workaday world.

As I threaded my way through the exhibit I was struck by a sad realization: we're celebrating the replacement of people with automation. The exhibit was a valentine to all of the entrepreneurs, engineers, investors and vendors who are trying their best to render obsolete much of the remaining workforce. This planned obsolescence has been going on for many, many years.

Despite being against our own best interest, we patrons excitedly embrace these “futurama” style exhibitions, perhaps because secretly all of us believe that we will evade the job title of “obsolete”. Absent in the exhibit was a display on what the redundant workers would be doing with their involuntary free time. Fishing or golfing no doubt.

The top-level beneficiaries of robotics are the owners of the factories that make and use them. The driver is that robotics properly done may extend margin growth into the future. A way to overcome foreign competition is by reducing overhead, especially labor costs. Robotics and AI are economic bubbles in the same manner that computers and smart phones have been. The early adopters could enjoy a competitive advantage by the way they use their resources. Profits are unlikely to be channeled into hiring because, well, they’re profiting from the use of robotics. Once automation becomes normalized, there is no going back.

Insider business tip: Healthy companies match labor to the demand for product. More demand, more labor. Increased profits may go towards growth and acquisition, or it may go to the stockholders or to bonuses for management. But rarely if ever a price reduction to the public. If you are making a dandy profit and sales are strong, why hire or reduce prices?

The secondary beneficiaries will be consumers, who will likely be oblivious to the fact that widget prices have not risen lately. Lower overhead does not automatically result in price savings for the end user. Extra margin will be absorbed by the manufacturer or seller. Just as likely, it may be consumed by the manufacturer in wholesale price negotiations with retailers in the eternal battle for retail shelf space.

Many will offer that the history of man’s use of tools from the stone axe and wheel to AI driven automation is/was inevitable. The ascent of mankind is driven in part by our ability to use tools and develop a command of energy. It is difficult to think of a progressive industrial technology that did not result in the reduction of labor contribution to the overall cost of production. Nobody mourns the loss of the mule team and wagon, steam locomotives, or whale oil. We celebrate obsolescence and we take rapid progress for granted. Technological triumphalism is what we all celebrate.

But we should remind ourselves that there exists a substantial negative aspect of the story of technological progress. It is the very thing it enables: the reduction of labor hours per unit of production. The drive to raise profit margins is relentless, partly because the cost of doing business always rises and eats into margins. Labor costs in particular are always front and center in the mind of business owners.

The situation today is different than when Henry Ford developed his form of mass production. Then there was a smaller population, with a significantly larger fraction of people living on farms capable of growing their own food. Many common goods and services were in the hands of local business operators who produced locally and distributed locally. Restrictions on manufacturing and business operations were less onerous than today, allowing for greater flexibility in methodology. It may be fair to say that mass production is now widespread and, as a whole, optimized to some degree. Early automation with just limit switches and relays has given way to microprocessor-controlled process machinery. What is happening presently is the introduction of artificial intelligence (AI). This is the natural progression of technology.

However, we can look a step or two further ahead and ask: when will an AI system take over the total management of a factory? When will an AI system have human subordinates? How tight a leash would we allow an AI system to have on the management of people? The presence of slack in the organization no doubt makes many job descriptions tolerable. What if AI tightened all of the slack in business operations so that every half second is accounted for? Would people consent to working for an AI? Companies like Amazon are getting close to this, but there is still human oversight. Extrapolating, it is easy to predict that one day, very quietly, human management will disappear at some level and in its place will be an AI system.

AI has to be taught. Will there be standards of behavior built-in governing how AI interacts with its human subordinates? Will everyone want their companies managed by an AI programmed to have a Jack Welch profile? My god, I hope not.

Another awful thought is the possibility of government and the military run by AI. Let that roll around in your mind for a bit.

There is a need to get back to basic principles here. What is our purpose in life? For most, I think, it is to love and be loved, and to participate in some kind of rewarding activity. We all want to be useful and to leave behind some kind of legacy. There is no doubt that the replacement of human labor by AI-driven systems will continue to move forward, encroaching on all of our lives. Ultimately this is driven by a few people at the top who will reap the rewards, further concentrating wealth in the hands of a few trillionaires. Is concentrated control of limited resources a good thing? Is there any choice?

There is also a large fraction of the population that is not very progressive or forward-looking at all. While they enjoy the devices and comforts of advanced technology, they neither understand nor care about what is needed to develop a drug or design a new semiconductor chip. Behind our modern civilization is an educated and skilled workforce. However, the US is home to many people who are anti-intellectual by nature. This trait has been there all along and will persist into the future.

In some ways these people are disruptive to the progress and stability of the American experiment and, as of this writing, it isn't at all clear how this will play out. The USA may well not be a stable enough environment in the future to sustain the continued, very expensive growth of technology. Technological advance requires a highly educated workforce who can afford the training to get there. Just to stay even with what we already have, the pipeline of educated people needs to be full.

Forward looking people, the ones who want to sustain our advanced civilization, must step up and be counted or the thing will expire. For all of its problems, the US has nonetheless been a productive incubator of innovation and a great many positive aspects of advanced civilization in the form of a noisy, somewhat chaotic liberal democracy. The goose that laid the golden egg is still alive. Shouldn’t we keep it going?

Archives

Blog Stats

  • 571,340 hits
