You are currently browsing the category archive for the ‘Technology’ category.

First, the word is out. According to the EIA, the US was the world’s leading oil producer for the 6th straight year in 2023, producing 12.6 million barrels per day.

It is common for people to blame rising US gasoline and diesel prices only on restrictions in crude oil production and alleged government regulatory overreach. Indeed, pressure on the gas and oil supply side, or even just the threat of it, can lead to unstable retail gasoline and diesel prices. What is less appreciated is the role of petroleum refineries in prices. To be sure, there is always price speculation on both the wholesale and retail sides of gas and diesel pricing to consider, no matter the throughput. Like everywhere else, sellers in the petroleum value chain seek to charge as much as they possibly can, 24/7/365. Everyone is itching to charge more but is hindered by competition and risk.

Refineries are only one of several bottlenecks in the gasoline and diesel supply chain that can influence retail prices. In principle, more gas and oil can always be produced at the wellhead by increased exploration or increased imports. Even so, there are constraints on transporting crude to refineries. Pipelines have flow rate limitations, and storage tank farms and ocean tanker fleets all have finite capacity. Another bottleneck today is access to both the Suez and Panama canals. Suez Canal traffic is threatened by Houthi missile strikes on commercial shipping in the Red Sea, and the Panama Canal seems to be drying up. The result is increased shipping costs and delays for international transport, which consumers will have to bear.

What do refineries do?

Refineries are very special places. Within the refinery there is 24/7 continuous flow of large volumes of highly flammable liquids and gases that are subjected to extreme temperatures and pressures for distillation, cracking, alkylation, hydrogenation and reforming. The whole refinery is designed, built and operated to produce the fastest and highest output of the most valuable group of products: fuels. This group includes gasoline, diesel, aviation fuel, and heating oil.

Petrochemicals account for approximately 17 % of refinery output. These petrochemical streams supply pharmaceutical raw materials, polymer products, coatings and films, synthetic fibers, personal hygiene products, synthetic rubber, lubricating grease and oils, paint, cleaning products and more. Regardless of what we may think of plastics and other synthetic materials, the 17 % produced by refineries feeds a very large fraction of the global economy. If plastic bags went away overnight, the whole world would begin to search immediately for alternatives like wood, metal or cotton/wool/flax/hemp.

Occasionally technological challenges confront refineries. An early challenge was the production of high octane anti-knock gasoline. This was investigated thoroughly as early as the 1920’s as the demand for more powerful automotive and aircraft engines was rising. Luckily for the USA, UK, and Germany, the anti-knock problem was solved just prior to WWII. This breakthrough led to aircraft engines with substantially increased power per pound of engine weight.

Leaded Gas

The petroleum that goes into gasoline is naturally rich in a broad range of straight chain hydrocarbon molecules. Straight chain hydrocarbons were used in the early days of happy motoring, but engine power remained low. While these straight chain hydrocarbons have valuable heat content for combustion, the problem is that in a piston engine they cannot withstand the pressures in the compression stroke that would give greater power. To get maximum power from a gasoline engine, it is desirable to have the piston travel as far as possible, delivering maximum work to the crankshaft. However, a long stroke length means greater compression and higher pressure near the top of the compression stroke. Straight chain hydrocarbons could not withstand the higher pressures of the compression stroke and would detonate before the piston reached the top of the cycle. This results in knocking, a destructive pre-detonation with power loss.
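The link between compression and detonation can be sketched with the ideal adiabatic relations T2 = T1·r^(γ−1) and P2 = P1·r^γ. Everything below (the heat-capacity ratio γ, the starting temperature and pressure) is an illustrative assumption, not data from any real engine, but it shows why higher compression ratios put more thermal stress on the fuel:

```python
# Ideal adiabatic compression estimate: why higher compression
# ratios demand knock-resistant fuel. All numbers are illustrative
# assumptions, not engine measurements.
GAMMA = 1.35     # assumed heat-capacity ratio of the fuel-air charge
T1_K = 320.0     # assumed charge temperature at start of compression (K)
P1_ATM = 1.0     # assumed pressure at start of compression (atm)

def end_of_compression(r: float) -> tuple[float, float]:
    """Return (temperature K, pressure atm) at the top of the compression
    stroke for compression ratio r, assuming ideal adiabatic compression:
    T2 = T1 * r**(gamma - 1),  P2 = P1 * r**gamma."""
    return T1_K * r ** (GAMMA - 1), P1_ATM * r ** GAMMA

for r in (6.0, 8.0, 10.0, 12.0):
    t2, p2 = end_of_compression(r)
    print(f"r = {r:4.1f}:  T2 ~ {t2:5.0f} K,  P2 ~ {p2:5.1f} atm")
```

Even this toy model shows end-of-compression temperature and pressure climbing steeply with compression ratio, which is the environment in which a straight chain hydrocarbon will detonate early.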

Tetraethyllead was invented in 1921 by Thomas Midgley, Jr, working at General Motors. After some deadly and dissatisfying work by DuPont, General Motors and Standard Oil Company of New Jersey started the Ethyl Gasoline Corporation in 1924, later called Ethyl Corporation, and began to produce and market tetraethyllead. Within months of startup, the new company was faced with cases of lead poisoning, hallucinations, insanity and fatalities.

The first commercially successful fuel treatment to prevent this pre-detonation was tetraethyllead, (C2H5)4Pb. This is the lead in “leaded” gasoline. The use of (C2H5)4Pb began before WWII, just in time to allow high compression aircraft engines to be built for the war. It allowed for higher powered aircraft engines and higher speeds for the allies, which were applied successfully to aerial warfare. The downside of (C2H5)4Pb was the lead pollution it caused. Tetraethyllead is composed of two chemical features: lead and four tetrahedrally arranged ethyl hydrocarbon groups. The purpose of the four ethyl groups (C2H5) in (C2H5)4Pb was to give hydrocarbon solubility to a lead atom. It was the lead that was the active feature of (C2H5)4Pb and brought the octane boosting property. At relatively low temperature the ethyl groups would cleave from the lead, leaving behind a lead radical, Pb•, which would quench the combustion process just enough to allow the compression stroke to complete and the spark plug to ignite the mixture as desired.


While tetraethyllead was especially toxic to children, it was also quite hazardous to (C2H5)4Pb production workers. Its replacement was only a matter of time.


Fuel additives were found that would reduce engine fouling by scavenging the lead as PbCl2 or PbBr2 which would follow the exhaust out of the cylinder. While this was an engineering success, it released volatile lead products into the atmosphere.


Eventually it was found that branched hydrocarbons could effectively inhibit engine knock or pre-detonation and could replace (C2H5)4Pb, which they did. While lead additives have been banned for some time from automotive use, general aviation has been allowed to continue with leaded aviation gas (avgas) in light piston engine aircraft, such as 100 octane low lead (100LL). Only recently has leaded avgas become a matter of public concern.

A refinery not only engineers the production of fuel components, it must also formulate blends for its customers, the gas stations, to sell. The formulations vary with the season and the location. Some gasolines have ethanol, other oxygenates like MTBE, octane boosters, detergents and more. One parameter is the volatility of the fuel. When injected into the cylinder, the fuel must evaporate at some optimum rate for best fuel efficiency. This depends on the vapor pressures of the components.
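As a rough sketch of how component vapor pressures roll up into a blend property, an ideal-mixture (Raoult’s law) estimate weights each pure-component vapor pressure by its mole fraction. Real gasoline blending relies on empirical blending indices rather than this ideal model, and the three components and their values below are invented for illustration:

```python
# Raoult's-law sketch of blend vapor pressure: the blend's vapor
# pressure is the mole-fraction-weighted sum of the pure-component
# vapor pressures. Components and values are illustrative assumptions;
# real gasoline blending uses empirical blending indices.
def blend_vapor_pressure(components: list[tuple[float, float]]) -> float:
    """components: list of (mole_fraction, pure_vapor_pressure_kPa).
    Returns the ideal-mixture vapor pressure in kPa."""
    total_x = sum(x for x, _ in components)
    assert abs(total_x - 1.0) < 1e-6, "mole fractions must sum to 1"
    return sum(x * p for x, p in components)

# An assumed 3-component mix at the vapor-pressure test temperature:
mix = [(0.10, 350.0),   # light ends (e.g. butane): high vapor pressure
       (0.60, 60.0),    # mid-range cut
       (0.30, 10.0)]    # heavy aromatics
print(f"ideal blend vapor pressure ~ {blend_vapor_pressure(mix):.0f} kPa")
```

The small high-vapor-pressure fraction dominates the blend volatility, which is why refiners adjust light ends like butane seasonally.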

Back to Refineries

The production volumes of the individual fuel products will not match the contents of the crude oil input. Gasoline is the most valuable product, but more gasoline leaves the refinery than arrives in the crude. Any given grade of gasoline has many, many components and the bulk of them have somewhere around 8 carbon atoms in the hydrocarbon chain. Wouldn’t it be nice if longer hydrocarbon chains could be broken into smaller chains to be added into the gasoline mix? And guess what, that is done by a process called “cracking”. A piece of equipment called a “cat cracker” uses a solid ceramic catalyst through which hot hydrocarbon gases pass and get cut into smaller fragments.

But what about straight chain hydrocarbon molecules? Wouldn’t it be nice to “reform” them into better and higher octane automotive fuels? There is a process that uses a “reformer” to rearrange hydrocarbon fuels to give better performance. The products from this process are called reformates.

Reforming is a process that produces branched, higher-octane hydrocarbons for inclusion in gasoline product. Happily, it turns out that gasolines rich in branched hydrocarbons are able to resist pre-detonation, and branched hydrocarbons have come to replace tetraethyllead in automotive fuels entirely. Today we still refer to this lead-free gasoline product as “unleaded”.

Octane and Cetane Ratings

Octane rating is a measure of resistance to pre-detonation and is determined quantitatively by a single-cylinder variable compression ratio test engine. Several octane rating systems are in use. RON, the Research Octane Number, is based on the comparison of a test fuel with a blend of standard hydrocarbons. The MON system, Motor Octane Number, covers a broader range of conditions than the RON method. It uses preheated fuel, variable ignition timing and higher engine rpm than RON.

Some gasoline is rated by the (R + M)/2 method, which is just the average of the RON and MON values.

In both the RON and MON systems, the straight chain hydrocarbon standards are n-heptane which is given an octane rating of 0 and the branched hydrocarbon 2,2,4-trimethylpentane, or isooctane, which is given an octane rating of 100.
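The (R + M)/2 figure posted at US pumps follows directly from the two test numbers, and the rating itself is interpreted against the standards above: a rating of N means the fuel knocks like a blend of N % isooctane and (100 − N) % n-heptane. A minimal sketch, with illustrative RON/MON values:

```python
# Anti-knock index as posted at US pumps: (R + M) / 2.
# A rating of N means the fuel knocks like a reference blend of
# N % isooctane (rated 100) and (100 - N) % n-heptane (rated 0).
def anti_knock_index(ron: float, mon: float) -> float:
    """Posted (R + M)/2 pump octane from RON and MON."""
    return (ron + mon) / 2

# Illustrative values only; actual RON/MON pairs vary by blend.
print(anti_knock_index(91.0, 83.0))   # -> 87.0, a common US regular grade
```

Because MON is measured under harsher conditions it is typically several points below RON, so the posted average sits between the two.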

Tetraethyllead and branched hydrocarbons are octane boosters. Methyl tert-butyl ether (MTBE), ethyl tert-butyl ether, and aromatics like toluene are also used to boost octane values. Internal combustion engines are built to use a gasoline with a minimum octane rating for efficient operation. Ratings of 85 or 87 are common for regular “unleaded” gasoline. Higher compression ratio engines require higher octane fuel, premium grade, to avoid knocking.

For comparison, diesel has a RON rating of 15-25 octane, so it is entirely unsuitable for gasoline engines. Diesel has its own system called the Cetane rating. The Cetane Number is an indicator of the combustion speed of the diesel and the compression needed for ignition. Diesel engines use compression for ignition, unlike gasoline engines which use a spark. Cetane is n-hexadecane, a 16-carbon straight chain with no branching, and is given a Cetane Number (CN) of 100. Similar to the octane rating, the branched 16-carbon hydrocarbon heptamethylnonane, or isocetane, is given a CN of 15.

Refineries must keep close tabs on seasonal demand for their various cetane and octane-rated products as well as the composition of the crude oil inputs which can vary. Each gasoline product stream has performance specifications for each grade. While gasoline is a refined product free from water, most sulfur and solid contaminants, it is not chemically pure. It is a product that contains a large variety of individual hydrocarbon components varying by chain length, branching, linear vs cyclic, saturated vs unsaturated members that together afford the desired properties.

Specific Energy Content

Absent ethanol, the combustion energy values of the various hydrocarbon grades are so similar that the differences are negligible. The energy content of pure ethanol is about 33 % lower than gasoline. Any energy differences would be due to subtle differences in blending to achieve the desired octane rating or to proprietary additives like detergents. A vehicle designed to run on 85 octane will not receive a significant boost in power from 95 octane; only an engine designed to operate on higher octane fuel can exploit it.

Source: Wikipedia

From the Table above, comparing the polypropylene (PP) and polyethylene (PE) entries to gasoline, we see that the specific energies are the same. The two polymers and gasoline are saturated hydrocarbons, so it is no wonder they have the same specific energies. Polystyrene is a bit lower in specific energy because its hydrogen content is lower, reducing the amount of exothermic H2O formation as it burns. The point is that by throwing away millions of tons of PP or PE every year, we are throwing away a whopping amount of potential fuel for combustion and electrical energy generation.

Petroleum based liquid fuels burn readily because of their high vapor pressure and low flash points. Polyolefins like PP and PE by contrast have virtually no vapor pressure at room temperature and consequently are difficult to ignite. In order to burn, polyolefins need to be thermally cracked to small volatile fragments in order to provide enough combustible vapor for sustained combustion. Plastic fires tend to have an awful smell and dark smoke because the flame does a poor job of energizing further decomposition to vapor.

Going from E10 to E85, the specific energy drops considerably, from 43.54 to 33.1 megajoules per kilogram (MJ/kg). Replacing a significant quantity of gasoline with the already partially oxidized ethanol lowers the potential energy. In the tan colored section of the table we can see the elements silicon to sodium. These elements are either very oxophilic or electropositive and release considerable heat when oxidized. Some metals amount to a very compact source of readily oxidizable electrons.
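The E10-to-E85 drop can be roughly reproduced by converting the ethanol volume percent to a mass fraction and weighting the component specific energies. The densities and heating values below are assumed handbook-style figures, so the results only approximate the table’s numbers:

```python
# Mass-basis specific energy of an ethanol-gasoline blend from the
# volume percent ethanol. Property values are assumptions (typical
# handbook figures), so results only roughly track the table's values.
RHO_GASOLINE = 0.745   # kg/L, assumed gasoline density
RHO_ETHANOL = 0.789    # kg/L, ethanol density
E_GASOLINE = 46.4      # MJ/kg, assumed gasoline specific energy
E_ETHANOL = 29.7       # MJ/kg, roughly 33 % below gasoline

def blend_specific_energy(vol_pct_ethanol: float) -> float:
    """MJ/kg of an ethanol/gasoline blend given ethanol volume percent."""
    v_eth = vol_pct_ethanol / 100.0
    m_eth = v_eth * RHO_ETHANOL               # kg ethanol per liter of blend
    m_gas = (1.0 - v_eth) * RHO_GASOLINE      # kg gasoline per liter of blend
    w_eth = m_eth / (m_eth + m_gas)           # ethanol mass fraction
    return w_eth * E_ETHANOL + (1.0 - w_eth) * E_GASOLINE

for label, pct in (("E10", 10.0), ("E85", 85.0)):
    print(f"{label}: ~ {blend_specific_energy(pct):.1f} MJ/kg")
```

The model lands within about a MJ/kg of the quoted E10 and E85 figures; the residual gap reflects which heating values and densities are assumed.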

Refinery Troubles

According to the US Energy Information Administration (EIA), US refinery output in the first quarter of 2024 dropped overall by 11 %, with utilization falling as low as 81 %. Decreasing inventories are causing rising retail prices. Still, average gasoline and diesel prices are currently below those of the same period in 2023.

According to EIA, the US Gulf Coast has seen the largest 4-week average drop in refinery utilization, 14 % since January 2024. This is attributed in part to the early start of maintenance shutdowns at the Motiva Port Arthur and Marathon Galveston Bay refineries, which together account for 7 % of US capacity.
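The “4-week average” figure EIA reports is just a trailing mean over weekly utilization percentages. A sketch with made-up weekly numbers:

```python
# Trailing 4-week mean of weekly refinery utilization, the kind of
# smoothing behind EIA's "4-week average" figures.
# The weekly percentages below are invented for illustration.
def trailing_mean(values: list[float], window: int = 4) -> list[float]:
    """Trailing window-mean; entries before a full window are omitted."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

weekly_utilization = [93.0, 92.0, 88.0, 84.0, 81.0, 83.0]  # assumed %
print(trailing_mean(weekly_utilization))
```

Smoothing over four weeks damps one-week events like a storm shutdown, so a sustained 14 % drop in the averaged series indicates a prolonged loss of capacity rather than a blip.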

Galveston Marathon Refinery. Source: Google Images.
Motiva, Port Arthur, TX. Source: Google images.

Weather has also factored in this year, as refinery production was halted in several locations in the US. A severe winter storm shut down TotalEnergies’ 238,000 barrel-per-day refinery in Port Arthur, Texas.

TotalEnergies, Port Arthur, TX.

Oil production in North Dakota fell by roughly half, an estimated drop of 600,000 to 650,000 barrels per day.

Exxon Mobil Corp returned a fluid catalytic cracker and a coker to normal operation at its 564,440 barrel per day refinery in Baytown, Texas.

ExxonMobil Corp, Baytown, TX. Source: Google Maps.

A Flint Hills Resources 343,000 barrel per day refinery in Corpus Christi, Texas, was significantly impacted by unseasonably cold weather including freezing rain.

Flint Hills Resources, Corpus Christi, TX. Source: Google Maps.
Flint Hills Resources East Plant, Corpus Christi, TX. Source: Google Maps.

The largest refinery in the Midwest, BP’s 435,000 barrel per day refinery in Whiting, Indiana, was taken off-line by a power outage, forcing a 10 % drop in Midwest refinery utilization in the first week of January. Normally the Midwest region produces as much gasoline and diesel as it consumes. This rich local supply leads to somewhat lower prices in the region.

BP’s Whiting, IN, refinery along the southern shore of Lake Michigan, between Gary and South Chicago.

There is an old saying that goes “necessity is the mother of invention.” Its meaning is obvious. It says that when you run into a problem, you can invent your way around it. Or at least try to. The other solution to a problem is simply to live with it.

I recall that during the Apollo project in the late 1960’s, many conservatives would complain about the cost of going to the moon. Social progressives likewise complained, arguing that NASA funds should be shifted to social programs here on earth. Technology progressives would retort that it was worth it because of all of the spin-offs that were appearing out of the effort. The reply to this was that if you wanted some shiny new widget, just invent it. You don’t have to go to the moon.

Presently I can look back at the two major research domains, academic and industrial, and make comparisons. In academia, a professor’s work product is split between research, teaching and service to the school. Research is commonly measured by the number of papers published, especially in the prestigious journals. In some institutions, patenting is also taken into account. As for teaching, there are student evaluations and performance reviews by the department chair or the dean. This includes past performance in committees. A motivation in the first few years is to get tenure. Academic research includes putting research results in the public literature for all to use.

So, what about the mother of invention? Generally, in chemistry an invention comes from some kind of investigative activity, curiosity or need. Sometimes you may want to invent around an active patent rather than go into a licensing agreement.

The US patent office allows only one invention per application. If you choose, you can lop off your other invention and file it separately as a divisional patent. You would do this because the patent examiner will have raised an objection to your original filing. Doing a divisional filing allows you to use content from the first, or parent, patent application and you get the filing date of the parent as well. Early filing dates are very important.

Sometimes patents are written very narrowly and leave “white space” or potential claims around them. This is not always desirable so the matter can be solved by the use of “picket fence patents.” You patent your core art as broadly as the patent office will allow, then you file for patents that cover related art that a competitor could conceivably patent that would allow them to compete against you. By raising the cost of entry into your market or narrowing the scope of new art, you can dissuade competitors from entry or at least make them pay a heavy price for it. Who knows, maybe they’ll decide to buy a license from you or even an entire patent. An argument against picket fence patenting is that patents can be very expensive.

Academic research has a high reliance on external funding. This requires that the funding organization recognize the novelty and potential intellectual value of the research proposal. Industrial research has a high reliance on the market potential of an invention. What is the breakeven time and sales potential of the invention? Will demand last long enough for the invention to provide a healthy return on investment?

Academics can and do patent their work on occasion, especially if the university pays for it. The thing I object to is that a great deal of research is paid for by the taxpayers. We pay for the research and then it gets patented and its use is restricted for 20 years. Maybe taxpayers (businesses) can enter into a licensing agreement, but maybe someone else has bought exclusive rights. Licenses can be anywhere from reasonable to absurdly restrictive, depending on the terms of the agreement. Many will want to add an extra fee based on the sales income of the product. This means that there will be an annual audit with pencil neck auditors poking around your business. It’s like having a ferret in your shorts. Avoid if at all possible.

But many companies leverage their output through licensing of technology they have no interest in developing themselves.

Industrial research is quite different in terms of administration of the endeavor. Industrial chemists are supervised by an R&D director and use in-house technology and science and/or what they learned in college, but here the results are aimed at producing something for sale or improving the profit margin of a process. There is no desire to share information. Industrial research produces in-house expertise as well as, hopefully, patentable inventions. Industrial invention can be driven by competition in existing markets or by expansion into something entirely new. Often it is to provide continuous margin growth if market expansion is slow.

The argument can be made to keep everything as a trade secret. Publishing your art in the patent literature can help competitors have their own brainstorms about the subject, or some may even be tempted to infringe on your art that is carefully laid out in front of their eyes. Competitors may also be cued into a new product’s capabilities, giving them insight into your product pipeline.

Both academic and industrial chemists invent. The difference is that in industry some inventions or art are held in trade secrecy, even if they never get commercialized. Academic researchers can and do keep secrets when they are aiming for a patent, at least until the patent is granted. Compartmentalization in a research group is critical, since disputes about inventorship can kill a patent. Once issued, academics will publish as many papers about the patented art as possible. Commonly, patents are assigned to whoever pays for them, usually an organization. An academic patent is assigned to the inventor’s institution, while in industry the company is the assignee. In both cases the inventor is usually awarded only a token of appreciation and the “satisfaction” of having a patent.

So, what about “necessity is the mother of invention”? There are some inventive projects that are too large or risky for a business or even a consortium of businesses to handle. I’m thinking of the Apollo Moon Landing program. The project required the resources of a government. A great deal of invention by many players allowed the moon landing to happen. The necessity for all of this invention was that the US government set a goal and farmed out thousands of contracts with vendors to make it happen. Much wealth was spread around into the coffers of industry, but with contracts having stringent specifications for man-rated spaceflight and tight timelines to be met.

That’s one of the values of having a government like we had in the 1960’s. They created the necessity and private industry made it happen. Despite the cultural upset of the 1960’s and the Viet Nam war, the Apollo Project worked. No astronauts died in space. This necessity/invention pressure does work.

A few years ago I found myself wandering through the Denver Museum of Nature and Science where I happened upon a robotics exhibition. In terms of the museum arts and sciences it was well conceived and executed, complete with a topical gift shop in the exit. All of the displays were accessible to the public in terms of language or hands-on widgetry. At each hands-on exhibit there stood a determined 5 to 8 year old yanking the controls around in a frantic effort to steer the robotic device away from the wall of the test area while onlookers yawned, waiting their turn. A visitor might have concluded that the purpose of the robot was to become stuck against an obstacle- a task it performed well.

These kinds of future technology exhibits are always popular at the museum. The lead-up to the exhibit was given all of the ballyhoo that the museum could afford. The theme of the exhibit was supercharged with the promise of a brighter tomorrow through the use of snazzy technology. If automobiles can be tied in, so much the better. It was a celebration of the triumph of technology for the everyman. The subtext was that only by the clever application of technology will we continue to improve our lives. These wonderful robots with their mechanical limbs and primate form would free humans from the dangers and tedium of the work-a-day world.

As I threaded my way through the exhibit I was struck by a sad realization. We’re celebrating the replacement of people with automation. The exhibit was a valentine to all of the entrepreneurs, engineers, investors and vendors who are trying their best to render obsolete much of the remaining workforce. This planned obsolescence has been going on for many, many years.

Despite being against our own best interest, we patrons excitedly embrace these “futurama” style exhibitions, perhaps because secretly all of us believe that we will evade the job title of “obsolete”. Absent in the exhibit was a display on what the redundant workers would be doing with their involuntary free time. Fishing or golfing no doubt.

The top-level beneficiaries of robotics are the owners of the factories that make and use them. The driver is that robotics properly done may extend margin growth into the future. A way to overcome foreign competition is by reducing overhead, especially labor costs. Robotics and AI are economic bubbles in the same manner that computers and smart phones have been. The early adopters could enjoy a competitive advantage by the way they use their resources. Profits are unlikely to be channeled into hiring because, well, they’re profiting from the use of robotics. Once automation becomes normalized, there is no going back.

Insider business tip: Healthy companies match labor to the demand for product. More demand, more labor. Increased profits may go towards growth and acquisition, or it may go to the stockholders or to bonuses for management. But rarely if ever a price reduction to the public. If you are making a dandy profit and sales are strong, why hire or reduce prices?

The secondary-level beneficiaries will be consumers, who will likely be oblivious to the fact that widget prices have not risen lately. Lower overhead does not automatically result in price savings for the end user. Extra margins will be absorbed by the manufacturer or seller. Just as likely, extra margins may be consumed in wholesale price negotiations with retailers in the eternal battle for retail shelf space.

Many will offer that the history of man’s use of tools, from the stone axe and wheel to AI-driven automation, was inevitable. The ascent of mankind is driven in part by our ability to use tools and develop a command of energy. It is difficult to think of a progressive industrial technology that did not result in the reduction of labor contribution to the overall cost of production. Nobody mourns the loss of the mule team and wagon, steam locomotives, or whale oil. We celebrate obsolescence, take rapid progress for granted, and indulge in technological triumphalism.

But we should remind ourselves that there is a substantial negative aspect to the story of technological progress. It is the very thing it enables: the reduction of labor hours per unit of production. The drive to raise profit margins is relentless, partly because the cost of doing business always rises and eats into margins. Labor costs in particular are always front and center in the minds of business owners.

The situation today is different from when Henry Ford developed his form of mass production. Then there was a smaller population with a significantly larger fraction of people living on farms capable of growing their own food. Many common goods and services were in the hands of local business operators who produced locally and distributed locally. Restrictions on manufacturing and business operations were less onerous than today, allowing for greater flexibility in methodology. It may be fair to say that mass production is now widespread and optimized to some degree as a whole. Early automation with just limit switches and relays has given way to microprocessor-controlled process machinery. What is happening presently is the introduction of artificial intelligence (AI). This is the natural progression of technology.

However, we can look a step or two ahead further and ask the question, when will an AI system take over the total management of a factory? When will an AI system have human subordinates? How tight of a leash would we allow an AI system to have on the management of people? The presence of slack in the organization no doubt makes many job descriptions tolerable. What if AI tightened all of the slack in business operations where every half second is accounted for? Would people consent to working for an AI? Companies like Amazon are getting close to this, but there is still human oversight. Extrapolating, it is easy to predict that one day, very quietly, human management will disappear at some level and in its place will be an AI system.

AI has to be taught. Will there be standards of behavior built-in governing how AI interacts with its human subordinates? Will everyone want their companies managed by an AI programmed to have a Jack Welch profile? My god, I hope not.

Another awful thought is the possibility of government and the military run by AI. Let that roll around in your mind for a bit.

There is a need to get back to basic principles here. What is our purpose in life? For most, I think, it is to love and be loved as well as to participate in some kind of rewarding activity. We all want to be useful and to leave behind some kind of legacy. There is no doubt that the replacement of human labor by AI-driven systems will continue to move forward, encroaching on all of our lives. Ultimately this is driven by a few people at the top who will reap the rewards, leading to an ever greater concentration of wealth among a few trillionaires. Is concentrated control of limited resources a good thing? Is there any choice?

There is also a large fraction of the population that is not very progressive or forward looking at all. While they enjoy the devices and comforts of advanced technology, they neither understand nor care about what is needed to develop a drug or design a new semiconductor chip. Behind our modern civilization is an educated and skilled workforce. However, the US includes many people who are anti-intellectual by nature. This trait has been there all along and will persist into the future.

In some ways these people are disruptive to the progress and stability of the American experiment and, as of this writing, it isn’t at all clear how this will play out. The USA may well not be a stable enough environment in the future to sustain the continued, very expensive growth of technology. Technological advance requires a highly educated workforce who can afford the training to get there. Just to stay even with what we already have, the pipeline of educated people needs to be full.

Forward looking people, the ones who want to sustain our advanced civilization, must step up and be counted or the thing will expire. For all of its problems, the US has nonetheless been a productive incubator of innovation and a great many positive aspects of advanced civilization in the form of a noisy, somewhat chaotic liberal democracy. The goose that laid the golden egg is still alive. Shouldn’t we keep it going?

Chemical process scale-up is a product development activity where a chemical or physical transformation is transferred from the laboratory to another location where larger equipment is used to run the operation at a larger scale. That is, the chemistry advances to bigger pots and pans, commonly of metal construction and with non-scientists running the process. A common sequence of development for a fine chemical batch operation in a suitably equipped organization might go as follows: Lab, kilo lab, pilot plant, production scale. This is an idealized sequence that depends on the product and value.

Scale-up is where an optimized and validated chemical experimental procedure is taken out of the hands of R&D chemists and placed in the care of people who may adapt it to the specialized needs of large scale processing. There the scale-up folks may scale it up unchanged or more likely apply numerous tweaks to increase the space yield (kg product per liter of reaction mass), minimize the process time, minimize side products, and assure that the process will produce product on spec the first time with a maximum profit margin.

The path to full-scale processing depends on management policy as well. A highly risk-averse organization may make many runs at modest scale to assure quality and yield. Other organizations may allow the jump from lab bench to 50, 200, or more gallons, depending on safety and economic risk.

Process scale-up outside of the pharmaceutical industry is not a very standardized activity that is seamlessly transferable from one organization to another. Unit operations like heating, distillation, filtration, etc., are substantially the same everywhere. What differs is the administration of this activity and the details of construction. Organizations have unique training programs, SOPs, work instructions, and configurations of the physical plant. Even dead common equipment like a jacketed reactor will be plumbed into the plant and supplied with unique process controls, safety systems and heating/cooling capacity. A key element of scale-up is adjusting the process conditions to fit the constraints of the production equipment. Another element is to run just a few batches at full scale rather than many smaller scale reactions. Generally, running one large batch costs only slightly more in manpower than running a smaller one, but it yields a lower cost per kilogram.
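The batch-size arithmetic is simple enough to sketch. A toy Python illustration, with made-up labor and material figures, shows why fewer, larger batches win on cost per kilogram: the near-fixed labor cost per batch gets spread over more product.

```python
# Toy comparison of batch-size economics. All dollar figures are
# hypothetical; the point is the fixed-cost-per-batch structure.

def cost_per_kg(batch_size_kg: float, n_batches: int,
                labor_per_batch: float, material_per_kg: float) -> float:
    """Blended cost per kilogram for a campaign of identical batches."""
    total_kg = batch_size_kg * n_batches
    total_cost = n_batches * labor_per_batch + total_kg * material_per_kg
    return total_cost / total_kg

# Same 1000 kg campaign, two ways (hypothetical figures):
small = cost_per_kg(batch_size_kg=50, n_batches=20,
                    labor_per_batch=4000, material_per_kg=12)
large = cost_per_kg(batch_size_kg=500, n_batches=2,
                    labor_per_batch=5000, material_per_kg=12)
print(f"20 x 50 kg batches: ${small:.2f}/kg")
print(f"2 x 500 kg batches: ${large:.2f}/kg")
```

Even granting the big batch a higher labor charge, the materials cost dominates and the per-kilogram cost drops sharply.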

Every organization has a unique collection of equipment, utilities, product and process history, permits, market presence, and most critically, people. An organization is limited in a significant way by the abilities and experiences of the staff who can use the process equipment in a safe and profitable manner. Rest assured that every chemist, every R&D group, and every plant manager will have a bag of tricks they will turn to first to tackle a problem. Particular reagents, reaction parameters, solvents, or handling and analytical techniques will find favor within any group of workers. Some are fine examples of professional practice and are usually protected under trade secrecy. Other techniques may reveal themselves to be anecdotal and unfounded in reality. “It’s the way we’ve always done it” is a confounding attitude that may take firm hold of an organization. Be wary of anecdotal information. Define metrics and collect data.

Chemical plants perform particular chemical transformations or handle certain materials as the result of a business decision. A multi-purpose plant will have an equipment list that includes pots and pans of a variety of functions and sizes and be of general utility. The narrower the product list, the narrower the need for diverse equipment. A plant dedicated to just one or a few products will have a bare minimum of the most cost effective equipment for the process.

Scale-up is a challenging and very interesting activity that chemistry students rarely hear about in college. And there is little reason they should. While there is usually some room in the graduation requirements of the ACS standardized chemistry curriculum, industrial expertise among chemistry faculty is rare. A student’s academic years in chemistry are about the fundamentals of the 5 domains of the chemical sciences: Physical, inorganic, organic, analytical, and biochemistry. A chemistry degree is a credential stating that the holder is broadly educated in the field and is hopefully qualified to hold an entry level position in an organization. A business minor would be a good thing.

The business of running reactions at a larger scale puts the chemist in contact with the engineering profession and with the chemical supply chain universe. Scale-up activity involves the execution of reaction chemistry in larger scale equipment, greater energy inputs/outputs, and the application of engineering expertise. Working with chemical engineers is a fascinating experience. Pay close attention to them.

Who do you call if you want 5 kg or 5 metric tons of a starting material? Companies will have supply chain managers who will search for the chemicals with the specifications you define. Scale-up chemists may be involved in sourcing to some extent. Foremost, raw material specifications must be nailed down. Helpful would be some idea of the sensitivity of a process to impurities in the raw material. You can’t just wave your hand and specify 99.9 % purity. Wouldn’t that be nice. There is such a thing as excess purity and you’ll pay a premium for it. For the best price you have to determine what is the lowest purity that is tolerable. If it is only solvent residue, that may be simpler. But if there are side products or other contaminants you must decide whether or not they will be carried along in your process. Once you pick a supplier, you may be stuck with them for a very long time.

Finally, remember that the most important reaction in all of chemistry is the one where you turn chemicals into money. That is always the imperative.

On a recent vacation trip to the Puget Sound area I managed to take a public tour of the Boeing manufacturing facility in Everett, WA. They don’t give away the tour- it costs $25 for adults and lasts about 90 minutes. For cash you get a movie highlighting the history of Boeing and a trip to a few mezzanines overlooking the 787 Dreamliner and 747 manufacturing areas. And just like Disney, you exit the attraction tour through the gift shop.

The first thing you notice is that security is very stringent. No phones, bags or purses, etc., once the tour begins. They are an important military contractor after all. As technically savvy as they may be though, the communication level of the tour guide was roughly 6-7th grade. The reason might be the wide range of visitor ages and nationalities. One Asian visitor on our bus wore a blue track suit bearing the name “Mongolia”.

It is easy to forget just how brilliant the US is and has long been in the broader aerospace world. Of course, other countries have developed advanced aerospace platforms, and produced their share of talent too, notably France, England, Germany and Russia. But one must admit that considerable advancement has happened here for some reason. A broad industrial base with access to raw materials and capital is certainly a big part of it. Perhaps our remote location between two great oceans and historical absence of the distraction of carpet bombing by foreign adversaries has a little to do with it as well.

Balloon on a hazy day.

For many of us, aerospace brings out excitement and optimism by its very nature. It embodies much of the best in people. The pillars of aerospace are many and rely strongly on ingenuity and engineering disciplines. By discipline I mean rigorous design-then-test cycles. A human-rated flying machine is a difficult and expensive build if the goal is for people and equipment to return intact. Unlike SpaceX, which has launched much cargo (among other things, a cheese wheel and a car), NASA has been launching people for a long time. Not to diminish the fine work of SpaceX or the other commercial efforts, it’s just that NASA takes a lot of heat for their deliberate pace.

Erie Airport, Colorado, from a hot air balloon at ca 2000′.

The last week has been a period of many modes of transportation. It’s been planes, trains, automobiles, ferry boats, and a hot air balloon. The nightmare of Seattle traffic is best forgotten. If you can avoid driving in Seattle during rush hours, do so.

If you can swing a hot air balloon ride, do it. Dig up some of that cash you have buried in the back yard and spend it. I found the ride to be absent any nerve-wracking moments and to be quite a serene experience. There is no wind aloft and it is dead silent when the burners aren’t going. Do bring a hat, however. The burners are bloody hot.

Getting ready for a 4-balloon launch.

Like all pilots, balloonists enjoy low level flight.

The burners emit tremendous radiant heat. A wise passenger wears a hat for this reason.
One of my work duties is to give safety training on the principles of electrostatic safety: ESD training we call it. The group of people who go through my training are new employees. These folks come from all walks of life with education ranging from high school/GED to BS chemists & engineers to PhD chemists & engineers. In order to be compliant with OSHA and with what we understand to be best practices, we give personnel who will be working with chemicals extensive training in all of the customary environmental, health and safety areas.

I have instructed perhaps 80 to 100 people in the last 6 years. At the beginning of each session I query the group for their backgrounds and ask if it includes any electricity or electronics study or hobbies. With the exception of two electricians in the group, this survey has turned up a resounding zero positive responses.

Admittedly, there could be some selection bias here. It could be that people with electrical knowledge generally do not end up in the chemical industry. My informal observations support this. But I’m not referring to experts in the electrical field. I refer to people who recall ever having heard of Ohm’s law. One might have guessed that the science requirements for high school graduation may have included rudimentary electrical concepts. One might have further suspected that hobby electronics could have occupied the earlier years of a few attendees. Evidently not. And it does not appear that parents have been very influential in this matter either.

I’m struggling to be circumspect rather than righteous. It is not necessary for any given individual to have learned any particular field of study. It is not even necessary for most people to have studied electricity. But it is important for a core of individuals to have done so. So, where are they? And why aren’t more people curious enough to strike out on their own in the acquisition of electrical knowledge?

Back to electrostatics. In order to have a working grasp of electrostatic principles, the concept of the Coulomb has to be conveyed. Why the Coulomb? Because it is the missing piece that renders electrostatic concepts mechanistic. It is my contention that a mechanistic grasp of anything can help a person reason their way through a question. The alternative is rote memorization. The mechanistic approach is what drives learning in the natural sciences.

To be safe but still effective as an employee, a person needs to be able to discriminate what will and what will not generate and hold static charge to at least some degree in a novel circumstance. By that I mean how accumulated or stranded charge can form and what kind of materials can be effectively grounded. If you are working with bulk flammables, your reflexes need to be primed continuously to recognize a faulty ground path in the equipment around you. At the point of operation, somebody’s head has to be on a swivel looking for off-normal conditions.

It is possible to cause people to freeze in fear and over-react to unseen hazards like static electricity. But mindless spooking is a disservice to everyone. To work around flammable materials safely requires that a person understand and respect the operating boundaries of flammable material handling. Those boundaries are grounding and bonding (see NFPA 77), avoiding all ignition sources, good housekeeping, and maintaining an inert atmosphere over the flammable material.

Much of electrostatic safety in practice rests on awareness of the fire triangle and how to avoid constructing it.

Back to electrical education. There are numerous elements of a basic understanding of electricity that will aid in a person’s life, including safely working around flammable materials. One element is the concept of conduction and what kinds of materials conduct electric current. Another is the concept of a circuit and continuity. Voltage and its relationship to current follows from the previous concepts.

I would offer that the ability to operate software or computers is secondary to basic knowledge of how things work.

Connecting these ideas to electrostatics are the Coulomb and the Joule. One volt of potential adds one Joule of energy to one Coulomb of charge. One Ampere of current is one Coulomb of charge passing a point in one second. Finally, one Ohm is the resistance that allows one Ampere of current to flow under an applied potential of one volt.
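For the numerically inclined, those definitions chain together neatly. A quick Python sketch with arbitrary values makes the bookkeeping explicit:

```python
# Ohm's-law bookkeeping straight from the definitions (arbitrary numbers).
charge_C = 2.0    # Coulombs of charge moved
energy_J = 24.0   # Joules delivered to that charge
time_s = 4.0      # seconds taken to move it

volts = energy_J / charge_C    # 1 V = 1 J per C   -> 12 V
amperes = charge_C / time_s    # 1 A = 1 C per s   -> 0.5 A
ohms = volts / amperes         # 1 Ohm passes 1 A at 1 V -> 24 Ohm
watts = volts * amperes        # power = J per s   -> 6 W

print(volts, amperes, ohms, watts)
```

Nothing here is more than division, which is rather the point: with the Coulomb in hand, the rest of the quantities fall out mechanically.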

For a given substance- dust or vapor- a minimum amount of spark energy (in Joules) must be rapidly released in order to cause an ignition. This is referred to as the Minimum Ignition Energy (MIE) and is commonly measured in millijoules (mJ).

A discussion on sparking leads naturally into the concept of power as the rate of energy transfer in Watts (Joules per second), connecting both the Joule and Ohm’s Law. Rapid energy transfer is more likely to be incendive because energy needs a finite time to disperse. Slow energy transfer may not be incendive simply because the energy needed to initiate and sustain combustion promptly disperses into the surroundings.
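To put some numbers on it, the classic capacitive-discharge estimate is E = ½CV². The body capacitance, voltage, and MIE figures in the sketch below are typical illustrative textbook values, not measurements; consult real data sheets before trusting any of them.

```python
# Capacitive spark energy E = 1/2 * C * V^2, compared against a couple
# of representative MIE values. The capacitance and voltage are typical
# illustrative figures for a charged human body, not measurements.

def spark_energy_mJ(capacitance_F: float, volts: float) -> float:
    """Energy stored in a capacitance at a given voltage, in millijoules."""
    return 0.5 * capacitance_F * volts**2 * 1000.0  # J -> mJ

body = spark_energy_mJ(150e-12, 10_000)  # ~150 pF body charged to 10 kV
print(f"Body discharge: {body:.1f} mJ")

# Illustrative ballpark MIE figures (mJ), not data-sheet values.
for name, mie in [("hydrocarbon vapor", 0.25), ("fine organic dust", 10.0)]:
    verdict = "can ignite" if body >= mie else "below MIE"
    print(f"{name}: MIE ~{mie} mJ -> {verdict}")
```

The sobering part is the first comparison: a person shuffling across a carpet can quietly store well over the MIE of common solvent vapors, which is why grounding and bonding are non-negotiable.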

A discussion of energy and power is useful for a side discussion on how the electric company charges for energy in units of kilowatt hours (kWh). This is a connection of physics to money.
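The billing arithmetic is trivial but worth making concrete. A short sketch, with an assumed illustrative rate rather than a real tariff:

```python
# Energy billing: kilowatt-hours are just power x time, priced per kWh.
# The rate below is an assumed illustrative figure, not a real tariff.
power_kW = 1.5            # e.g. a space heater
hours = 8.0               # overnight run
rate_usd_per_kWh = 0.13   # assumed rate

energy_kWh = power_kW * hours          # 12 kWh
cost = energy_kWh * rate_usd_per_kWh
print(f"{energy_kWh} kWh -> ${cost:.2f}")
```

Once a trainee sees that the meter counts Joules per second times seconds, the physics-to-money connection stops being abstract.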

The overall point is that a rudimentary knowledge of electrical phenomena is of general use, even in the world of chemical manufacturing. I often hear people talk about the importance of “tech” in regard to K-12 education. By that they seem to mean that using software is the critical skill. I would offer that the ability to operate software or computers is secondary to basic knowledge of how things work. Anyone with a well rounded education should be able to learn to use software as they need it.


Addendum 8/16/18. Since I wrote this essay, I’ve taught another 2 groups of trainees and not a single one of the 12 individuals could say that they had heard of Ohm’s law. All were high school grads over an age range of 22 to ~50. One had a fresh BS Chem. E. degree. Evidently none had enough inclination in their travels to noodle their way through a rudimentary grasp of volts, ohms, amperes and basic electronic components. I find this incredible given the penetration of electrical contrivances in our lives.

This feeds into a pet theory of mine that true expertise is being replaced with software skills. I know this because it seems to be happening to me as well. Is this an aspect of the Dunning-Kruger effect?

Dear Samsung,

I have owned a Samsung S6 smartphone for several years. Permit me to offer an appraisal of this device.

Satisfactory Attributes

  1. Satisfactory reliability.
  2. Appearance, size, and weight.
  3. Fits in most shirt pockets for maximum personal utility.
  4. Several useful functions and features.
  5. A QWERTY keyboard for faster texting.
  6. Takes video and stills.
  7. Sends video and JPEG files.

Unsatisfactory Attributes

  1. Bad, bad ergonomics overall.
  2. Silicone protective cases prevent easy insertion into shirt pockets.
  3. No inactive margin on screen side by which to hold the phone without activating some feature.
  4. In general the worst ergonomics possible for a camera. It would be difficult to worsen the design.
  5. Subject to mandatory creeping featurism. This is a type of cancer.
  6. Screen difficult or impossible to see in outdoor daylight.
  7. Too many features. In this regard it resembles a universal kitchen tool. Eventually you realize that all you really wanted was to dice the potatoes.
  8. I frequently lose photographic opportunities because the f*cking camera was inadvertently toggled into some other mode, preventing activation of the “shutter”. See #3, this section. !%#@*&@#*&!

What do I really want?

  1. A flip phone that has a QWERTY keyboard, or
  2. A good purpose-built camera that offers basic telephony.

Why do I continue to use it?

  1. Expectation of accessibility by family, friends, and employer.
  2. Connection with friends and distant family via facebook.

Summation

Samsung, I pity you because you are stuck on the endless treadmill of ever increasing novelty. Because of this users are forced to adapt to updates of the Système du jour. I only wish that S6 purchase transactions would change in like manner. Listening to Samsung bitch about having to alter their enterprise system annually to accommodate the hidden needs of unknown organizations would bring a bit of cheer in a sadistic kind of way.

A lot of science is about trying to find the best questions. Because the best questions can lead us to better answers. So, in the spirit of better questions here goes.

By loosening environmental regulations aimed at pollution prevention or remediation, the mandarins reporting to POTUS 45 have apparently calculated that some resulting uptick in pollution is justified by the jobs created thereby.

Question 1: For any given relaxation in regulations that result in an adverse biological, chemical or physical insult to the environment, what is the limit of tolerable adverse effect?

Question 2: How will the upper limit of acceptable environmental insult be determined?

Question 3: Will the upper limit of acceptable environmental insult be determined before or after the beginning of the adverse effect?

For a given situation there should be some ratio of jobs to acceptable environmental damage.

Example: By relaxing the rules on the release of coal mining waste into a river, X jobs are created and, as a result, Y households are denied potable drinking water. What is an acceptable ratio of X to Y?

Those are enough questions for now. Discuss amongst yourselves.

I’m a fan of Gold Rush on the Discovery Channel and have been since the beginning. Aside from the producers’ constant over-dramatization and spreading the content a little too thin over the time block, I’d have to say that my main criticism would be with the miners themselves.

What I would throw on the table is the observation that there is a troublesome lack of analytical data supporting the miners’ choices of where to dig a cut. In the few episodes where core samples were taken, useful data were obtained and decisions made from them. But the holes were paid for grudgingly and covered too little ground. A sufficiently capitalized operation would be sure to survey the ore body and make the decision to bring in the heavy equipment on the basis of data.

Obviously they have been chronically short of capital for their operations. Fortunately for them, over the last 2 seasons they have been able to upgrade their wash plants, trommels, and earth moving equipment. Must be the TV connection.

But I suppose it is the very lack of capitalization that forms the dramatic basis of the show. Without scarcity there would be no drama. Without the conflicting personalities and dubious decision making there would be only a documentary on gold mining.

I have to imagine that the recent collapse in gold prices will get folded into the dramatic context in the next season.

I truly wish Parker Schnabel, the Hoffman crew, and the Dakota boys the best of luck in their efforts. What the viewers can’t see are the 10,000 details and problems that remain on the editing room floor.

Any questions?

Archives

Blog Stats

  • 571,348 hits
