
Devon Energy has raised $900 million in cash from Sinopec Group for a stake in Devon’s shale gas plays. These gas projects include the Utica, Niobrara, and Tuscaloosa formations.

What is interesting is not so much that China has bought its way into the extraction of a resource that the USA has in some abundance. What is more troubling is that China has bought its way up the learning curve in horizontal drilling and fracturing. 

According to the article in Bloomberg Businessweek-

China National Petroleum Corp., Sinopec Group and Cnooc Ltd. are seeking to gain technology through partnerships in order to develop China’s shale reserves, estimated to be larger than those in the U.S.

“In these joint ventures, the partner does typically get some education on drilling,” Scott Hanold, a Minneapolis-based analyst for RBC Capital Markets, said today in an interview.

So, the business wizards at Devon in OKC have arranged to sell their drilling magic to Sinopec for a short-term gain on drilling activity. Way to go, folks. Gas in the ground is money in the bank. These geniuses have arranged to suck non-renewable energy out of the ground as fast as possible.  Once again US technology (IP, which is national treasure) is piped across the Pacific to people who will eventually use it to beat us in the market.  Score another triumph for our business leaders!!

The market is like a stomach. It has no brain. It only knows that it wants MORE.    Th’ Gaussling.

 It’s a banner day for American Business.

I’ve turned my attention to reaction calorimetry recently. A reaction calorimeter (e.g., the Mettler-Toledo RC1) is an apparatus constructed to allow chemical substances to react while the heat flux that results is measured. Reaction masses may absorb heat energy from the surroundings (endothermic) or may evolve heat energy into the surroundings (exothermic).
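
For readers who want the gist of how a heat-flow calorimeter backs a reaction enthalpy out of temperature data, here is a minimal sketch. It is not the RC1’s actual software or calibration routine, just the textbook heat-flow balance q = U·A·(Tr − Tj) applied to a hypothetical jacket temperature log; U, A, the sample times, and the mole number are all illustrative assumptions.

```python
# Minimal heat-flow calorimetry sketch (illustrative numbers, not RC1 software).
# In an isothermal run the reactor temperature T_r is held constant while the
# jacket temperature T_j is modulated; the instantaneous heat flow removed is
# q = U * A * (T_r - T_j). Integrating q over the run approximates the reaction
# heat (baseline and calibration corrections are ignored here).

U = 250.0        # overall heat transfer coefficient, W/(m^2 K)   (assumed)
A = 0.021        # wetted heat transfer area, m^2                 (assumed)
T_r = 25.0       # reactor temperature, deg C (held constant)

# Hypothetical jacket temperature log: (time in s, T_j in deg C)
jacket_log = [(0, 25.0), (60, 22.0), (120, 20.5), (180, 22.5), (240, 24.0), (300, 25.0)]

q = [U * A * (T_r - T_j) for _, T_j in jacket_log]        # heat flow, W, at each sample

# Trapezoidal integration of q over time -> total heat evolved, J
Q_total = sum(0.5 * (q[i] + q[i + 1]) * (jacket_log[i + 1][0] - jacket_log[i][0])
              for i in range(len(q) - 1))

moles_limiting = 0.05                                     # mol charged (assumed)
print(f"Total heat evolved: {Q_total:.0f} J")
print(f"Apparent reaction enthalpy: {-Q_total / moles_limiting / 1000:.1f} kJ/mol")
```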

Calorimetry has been around for a very long time. What is relatively recent is the development of instrumentation, sensor, and automation packages that are sufficiently user friendly that RC can plausibly be used by people like me: chemists who are assigned to implement a technique new to the organization.  What I mean by “user friendly” is not this: an instrument that requires the full-time attention of a specialist to operate and maintain.

A user friendly instrument is one engineered and automated to the extent that as many adjustments as possible are performed by the automation and that the resulting system is robust enough that operational errors and conflicting settings are flagged prior to commencing a run.  A dandy graphical user interface is nice too. Click and drag has become a normal expectation of users.

An instrument that can be operated on demand by existing staff is an instrument that nullifies the need for specialists. Not good for the employment of chemists, but normal in the eternal march of progress. My impression is that RC is largely performed by dedicated staff in safety departments. What the MT RC1 facilitates is the possibility for R&D groups to absorb this function and bring the chemists closer to the thermal reality of their processes. Administratively, it might make more sense for an outside group to focus on process safety, however.

In industrial chemical manufacture the imperative is the same as for other capitalistic ventures- manufacture the goods with minimal cost inputs to provide acceptable quality. Reactions that are highly exothermic or are prone to initiation difficulties are reactions that may pose operational hazards stemming from the release of hazardous energy.  A highly exothermic reaction that initiates with difficulty- or at temperatures that shrink the margin of safe control- is a reaction that should be closely studied by RC, ARC, and DSC.

It is generally desirable for a reaction to initiate and propagate under positive administrative and engineering controls. Just as obviously, it is desirable that a reaction can be halted by the application of such controls. Halting or slowing a reaction by adjustment of feed rate or temperature is a common approach.  For second order reactions, the careful metering of one reactant into the other (semi-batch) is the most common approach to control of heat evolution.

For first order reactions, control of heat evolution is had by control of the concentration of unreacted compound or by brute force management of heating and cooling.

Safe operation of chemical processing is about controlling the accumulated energy in the reactor. The accumulated energy is the result of accumulated unreacted compounds. Some reactions can be safely conducted in batch form, meaning that all of the reactants are charged to the reactor at once. At t=0, the accumulation of energy is 100 %. A reliable and properly designed heat exchange system is required for safe operation (see CSB report on T2). In light of T2, a backup cooling system or properly designed venting is advised.
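
To illustrate the difference, here is a rough numerical sketch comparing the accumulation of a dosed reagent in a semi-batch addition with the 100 % accumulation of an all-in batch charge. The rate constant, volume, and dose time are hypothetical placeholders, not any particular process.

```python
# Sketch: accumulation of a dosed reagent in a semi-batch addition vs. an all-in
# batch charge. Second-order rate r = k[A][B]; B is dosed at a constant rate.
# All numbers are hypothetical and for illustration only.

k = 0.05            # L/(mol*s), assumed rate constant
V = 1.0             # L, volume treated as constant for simplicity
nA = 1.0            # mol of A charged up front
B_total = 1.0       # mol of B to be dosed
t_dose = 3600.0     # s, dose time (1 h)
dt = 1.0            # s, integration step

nB, dosed, peak_accum, t = 0.0, 0.0, 0.0, 0.0
while t < 2 * t_dose:
    feed = (B_total / t_dose) * dt if dosed < B_total else 0.0
    dosed += feed
    nB += feed
    r = k * (nA / V) * (nB / V) * V          # mol/s of B consumed
    nA -= r * dt
    nB -= r * dt
    peak_accum = max(peak_accum, nB / B_total)   # unreacted B as a fraction of the total charge
    t += dt

print(f"Peak accumulation of the dosed reagent: {100 * peak_accum:.1f} % of the total charge")
print("An all-in batch charge, by contrast, starts at 100 % accumulation at t = 0.")
```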

The issue I take with the designers of the process performed at T2 is this: They chose to concentrate the accumulated energy by running the reaction as a batch process. This is a philosophical choice. The reaction could have been run as a semibatch process by feeding the MeCp to the Na with a condenser on the vessel. Control of the exotherm could have been had by control of the feed rate and clever use of the evaporative endotherm. A properly sized vent with rupture disc should always be used. These are three layers of protection.

Instead, they settled on a batchwise process that relied on a now obviously inadequate pressure relief system and on the proper functioning of the water supply to the jacket.

No doubt the operators of the facility were under price and schedule pressure. The MeCp manganese carbonyl compound they were making is an anti-knock additive for automotive fuels and therefore a commodity product. I suspect that their margins were thin and that the resources may not have been there to properly engineer the process. This process has “expedient” written all over it in my view.

Reactions that have a latent period prior to noticeable reaction are especially tricky. Often such reactions can be rendered more reliable by operation at higher temperatures. Running exothermic reactions at elevated temperatures is somewhat counter-intuitive, but it can solve the problem of accumulation: the feed is consumed as it is added rather than piling up unreacted.

Disclaimer: The opinions expressed by Th’ Gaussling are his own and do not necessarily represent those of employers past or present (or future).

The South Meadow generating station was operated by the Hartford Electric Company in Hartford, CT. The unit described in the 1931 Pop Sci article used 90 tons of mercury in the boiler. The article states that the South Meadow generator produced as much as 143 kWh from 100 lbs of coal, as opposed to an average of 59 kWh from conventional coal fired plants and 112 kWh from exceptionally efficient coal fired plants. The article also describes an incident at the plant in which an explosion in the mercury vapor system breached containment, releasing mercury and exposing workers to mercury vapor.
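
Out of curiosity, the kWh-per-100-lb figures can be turned into rough thermal efficiencies. The article does not give the heating value of the coal, so the sketch below assumes a typical bituminous value of about 12,500 BTU/lb; the percentages are therefore only ballpark.

```python
# Back-of-the-envelope thermal efficiencies implied by the Popular Science figures,
# assuming a coal heating value of ~12,500 BTU/lb (my assumption, not stated in the article).

BTU_PER_LB_COAL = 12_500        # assumed heating value
BTU_PER_KWH = 3_412             # conversion factor

thermal_input_kwh = 100 * BTU_PER_LB_COAL / BTU_PER_KWH   # energy content of 100 lb of coal

for label, kwh_out in [("South Meadow mercury unit", 143),
                       ("average coal plant", 59),
                       ("best conventional plant", 112)]:
    print(f"{label}: ~{100 * kwh_out / thermal_input_kwh:.0f} % thermal efficiency")
```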

The Schiller Mercury Power Station in Portsmouth, NH, is described in this link.

I gave a talk in a morning I&EC session last Thursday at the Denver ACS National meeting. During an interlude provided by a no-show speaker, a member of the audience began to quiz down a hapless speaker who had earlier presented on the filtration of plasmids. The gentleman’s concern was this- We are continuing to develop conventional processing technology while fellows like Craig Venter are devising step-change techniques for genomic analysis and synthesis. People like Venter have their names mentioned in the same sentence with “synthetic biology”.  Why do we bother with the more primitive methods of research when the real action is with folks like Venter?

The inquisitive fellow was asking a rhetorical question to all of us. But the point he skipped over was the matter of intellectual property. He kept asking why don’t “we” just switch the paradigm right now and use such technology? Why continue with highly manual R&D?  The problem with his question was in the assumption that Venter’s technology was something that “WE” have access to. Venter’s technology does not automatically translate into a community tool. It is more like an item of commerce. In reality, this will likely represent a major uptick in productivity to the financial benefit of the intellectual property owners and licensees and their stockholders.

How the scientific workforce will fare is a different matter. Increased productivity usually means reduced labor per unit of output. I suspect that Venter’s technology represents a higher entry barrier to those who want to be in the market.  It may be that the outcome will be a broader range of diagnostic and treatment services available to a shrinking pool of insured people able to afford it.

Andrew Grove is the former CEO of Intel who was responsible for its transition from memory chip producer to microprocessor producer. According to Wikipedia, Grove is responsible for an increase of 4500 % in Intel’s market capitalization. In his youth he and his family escaped from Budapest, Hungary during the Soviet invasion of 1956. Grove holds a PhD in chemical engineering from UC Berkeley. Grove is now retired and is a senior advisor to Intel.

Grove recently wrote an article for Bloomberg that is quite insightful in its analysis of certain aspects of American corporate culture. In particular, Grove notes the disconnect between US technology startups and the subsequent expansion of business activity that leads to job growth. Specifically, he notes that startups are failing to scale up their business activity in the USA. The Silicon Valley job creation machine is powering down.

Grove makes an interesting point here,

A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer-electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies didn’t participate in the first phase and consequently weren’t in the running for all that followed. I doubt they will ever catch up.  Andrew Grove, 2010, Bloomberg

To build on what Grove is saying, I’ll embellish a bit and add that an industry is actually a network of manufacturers, suppliers, job shops, labor pools, insurers, bankers, and distributors. When deindustrialization occurs, the network of resources collapses. The middle class takes a big hit when a commodity network moves offshore. In the end, the intended market for commodity goods and services- i.e., the middle class- is weakened by the very move that was supposed to keep prices down and profits up.

Grove is most concerned with the matter of scaleup. This is the business growth phase that occurs after the entrepreneurship proves its worth in the marketplace. Investors pour money into large-scale operations and staff to get product onto the market. Grove suggests that domestic startups that do not follow on with domestic scaleup are not participating in keeping the magic alive.

Offshore scaleup undercuts the benefit of domestic innovation. In a sense, it is an abdication of the trust given to the entrepreneurs by the citizens who provided the infrastructure to make the innovation possible.

Grove makes a good point in his editorial and I think that the rest of us need to take an active stance to question the facile analysis so often uttered by business leaders when it comes to relocation of business units offshore.  Citizens paid for the infrastructure and a large part of the education that makes our innovative technology possible. There needs to be more public pushback on business leaders and government officials about this topic.

Air France Flight 447 crashed in the Atlantic 400-odd miles outbound from Brazil to Paris after its evening departure from Rio de Janeiro on May 31st, 2009. While the flight data recorder has not been recovered, 24 fault messages were relayed to the AF headquarters via satellite. From these messages, and from forensic evidence found floating in the area of the crash site, a picture of the event is beginning to emerge. Spiegel Online has published an analysis of the disaster based on what is presently known.

The evidence collected so far suggests that the aircraft impacted the water on its belly with a 5 degree nose up pitch attitude. The impact deceleration, calculated from certain kinds of material strength data, is 36 g.  The aircraft departed just under max gross takeoff weight with 70 tons of kerosene fuel on board.  Abnormalities did not begin to appear until the aircraft was ostensibly at a cruising altitude of ca 35,000 ft. There was a suspicious uptick in the OAT reading (outside air temperature) of a few degrees. Investigators believe that this is an indication of icing on the OAT sensor and pitot tube.

The aircraft may have been attempting to penetrate an area of thunderstorms in the inter-tropical convergence zone. This is a band of atmosphere on either side of the equator where northward and southward flows from the respective hemispheres meet and produce vertical air movement. The convergence of these flows can result in moisture laden air being lifted. Together with the natural buoyancy of warm humid air, vigorous convection cells can be kick-started into severe thunderstorms. The cloud tops in this zone can be substantially higher than those at the mid latitudes. At altitude, storm cells commonly produce icing conditions.

Out in the midocean spaces at night, airline pilots have only on-board radar and the moonlight, and perhaps a few pilot reports by others who have just been in the area, to estimate the areas of high storm intensity ahead. Flight through the intertropical convergence zone can produce bumpy rides to the point of violent turbulence. What most passengers don’t understand is that passenger jets are built to absorb considerable abuse before a structural failure occurs due to turbulence.

The upshot of the report is that the pitot tube that senses the airspeed of the aircraft failed due to icing.  This failure basically causes the computerized flight control system to shut down owing to lack of input of this key airspeed data. In flight control, airspeed is one of the very critical pieces of information necessary to sustain controlled flight.

Without airspeed information, and without computer assistance in the control of the various flight control surfaces, the modern passenger jet becomes very difficult to handle manually. This is especially true if the aircraft is under instrument conditions with low/no visibility and in high turbulence.

A complex and aerodynamically clean aircraft being jostled along all three axes at a high Mach number presents a large workload for the pilots. At a Mach number (0.85 or so) as typically attained in high altitude cruise, a sharp pitch down in the nose can lead to transonic flow over the control surfaces and in the engine inlet. This can lead to engine instability and loss of flight control. Sonic flows over ordinary flight surfaces can lead to flow separation and loss of control. This lesson was learned the hard way in the early days of high speed aviation. Pilots typically throttle back after penetrating turbulent air.

The investigators of AF 447 have all but concluded that the aircraft crashed owing to loss of critical airspeed information and subsequent departure from stable flight.  While the Spiegel article states that investigators are confident in this analysis, recovery of the flight data recorder will undoubtedly provide important details for refinement of the investigation.

Most of my industrial life has been spent in what can be called a semi-batch processing world where products tend to be high value, low volume. Fine chemical products sold at the scale of 1 ton/yr or less can be produced in a campaign of less than a dozen runs in 200 to 1000 gallon reactors in a batch or semi-batch mode, depending on the space yield, of course.

In my polylactic acid (PLA) days many years ago, we found ourselves necessarily in the monomer business. If you hope to introduce a new polymer to the market- a very difficult proposition by the way- you must be firmly in control of monomer supply and costs. Especially if the new polymer uses new monomers. New to the market in bulk, that is.

Our task in the scale up of polylactic acid was to come up with a dirt cheap supply of lactide, the cyclodimer of lactic acid. The monomer world is one of high volume, low unit cost.  By the time I had my tour of duty with PLA, a short tour in fact, much of the lactide art was tied up in patents. Luckily, my company had purchased a technology package that allowed us to practice.

Our method for producing lactide was a process called continuous reactive distillation. Basically, a stream of lactic acid and strong acid catalyst was injected onto a middle plate of a 25 plate distillation column which stood outdoors. The column was atop a small bottoms reservoir containing heated xylenes.

The solvent xylene was heated in a reboiler which was located 20 ft away from the column assembly. Hot solvent was circulated in a loop between the bottoms reservoir and the reboiler. After startup the solvent built up an equilibrium concentration of lactide and a dog’s lunch of oligomers.

At the injection point in the column, 85 % lactic acid and catalyst entered the middle of a multiple plate column that was charged with refluxing xylene vapor and condensate. While in the column the lactic acid esterified first to L2, the open-chain dimer, and then some fraction of it cyclized to lactide.

The water liberated by the esterification was vaporized by the hot xylene and equilibrated up the column to the overhead stream and out of the column to a condenser.  When condensed it phase separated in a receiver called a “boot” that had a cylindrical bottom protuberance that collected the water. The upper xylene phase was returned to the process.

Meanwhile, the xylene loop accumulated lactide and oligomers. The loop had a draw-off point where some predetermined percentage of the bottoms loop was tapped for continuous lactide isolation. This is where the fun began.

When cooled even just a little, the xylene phase emulsified. Badly. So, the trick was to induce a phase separation by forcing the emulsion through a ceramic filter. Here, the water phase and most of the oligomeric species were partitioned into a separate mass flow while the xylene phase was sent to a sieve bed for drying.

After passing through the sieve bed, the xylene phase was sent to the continuous crystallizer where it was chilled a bit to precipitate the lactide. A slurry of lactide solids called magma was then sent to a continuous centrifuge where the solids were isolated and the supernatant was returned to the bottoms loop.
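
To give a feel for the material flows, here is an idealized steady-state mass balance around the column and bottoms loop. The feed rate and conversion below are hypothetical placeholders (the real figures are long gone); the stoichiometry is simply 2 C3H6O3 → C6H8O4 + 2 H2O.

```python
# Idealized steady-state mass balance, with hypothetical feed numbers.
# Two lactic acid molecules condense to one lactide plus two waters.

MW_LA, MW_LACTIDE, MW_H2O = 90.08, 144.13, 18.02   # g/mol

feed_kg_per_h = 50.0          # kg/h of lactic acid on a 100 % basis (hypothetical)
conversion_to_lactide = 0.60  # fraction of the feed isolated as lactide (hypothetical)

mol_LA = feed_kg_per_h * 1000 / MW_LA                 # mol/h of lactic acid fed
mol_lactide = 0.5 * mol_LA * conversion_to_lactide    # mol/h of lactide isolated
mol_water = mol_LA * conversion_to_lactide            # mol/h of water of reaction taken overhead

print(f"Lactide isolated:   {mol_lactide * MW_LACTIDE / 1000:.1f} kg/h")
print(f"Water of reaction:  {mol_water * MW_H2O / 1000:.1f} kg/h")
print(f"Balance (unconverted feed, oligomers, recycle inventory): "
      f"{feed_kg_per_h - (mol_lactide * MW_LACTIDE + mol_water * MW_H2O) / 1000:.1f} kg/h")
```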

The Achilles heel of the process was residual acid. Since the monomer and the oligomers are all acidic species, and the catalyst is nearly as strong as sulfuric acid, pulling the neutral lactide cleanly and cheaply from this acidic hell broth was a problem. So big, in fact, that it eventually became the straw that broke the camel’s back. This shut our fledgling company down.

Residual acid in the monomer has a disastrous effect on the quality of PLA. It gives low MW product that is amber in color. The winning technology was the back-biting process for lactide production. It was applied by our competitors, who won the battle, and they (Dow-Cargill) went to market.

I’m going home now.  Just spent a few hours trying to make a parameter change its state on the GC side of  my spiffy new Agilent GC/MS.  Modern instruments are a confederation of subsystems that must give a thumbs up before a software magistrate will allow the instrument to initiate a run. If it is a hyphenated instrument, then all the more so.  All of the flow rates and temperatures and dozens of software settings must be in the proper state before the method can be executed. 

One of the first things you learn after acquiring a complex piece of apparatus is that the help menu is limited in scope.  The mere definition of a mode or a key or a parameter is hardly enough when an annunciator declares that the boat won’t move because the flippin’ gas saver mode is on. The gas saver feature is meant to reduce helium losses from the splitter when the instrument is idle.  What is especially irksome is when an obscure  feature declares that it suddenly can’t  play ball on the (N+1)th run.

My assistant is a truly gifted chromatographer.  She learned analytical lab management in pharmaceutical cGMP and EPA lab settings. What she can do with GC or HPLC is a thing of beauty.  I, on the other hand, have become a grumpy instrument Luddite. It’s not that I don’t like chromatography. In fact I really dig it.  What I get grumpy and dyspeptic about is having to claw up the learning curve of yet another software package and then use it enough to retain some kind of fluency.

So, in order to save face with my staff, I have to figure this thing out myself.  Modern chromatographic instrumentation is now configured around the needs of documentation requirements. Creeping featurism. Long gone are the days of sauntering up to the instrument and jamming a sample in it without having to answer a lot of irksome questions about method names and directory gymnastics. Software packages are designed to provide a robust paper trail on the results of all samples injected. It’s all gotten very “Old Testament”.

What is needed is a simplified mode of operation for boneheads like myself. For my process development work I just want resolved peaks, a peak report, and – please god- mass spectra of the components. I do not need a fancy schmancy report. I just need some numbers to scribble in my notebook and report in order to understand what happened in the reactor.

So there it is. A lamentation on chemistry.

As any process development chemist knows, there is motivation to optimize a chemical process to produce maximum output in the minimum of reaction space. In the context of this essay, I’m referring to batch or semi-batch processes. Most multipurpose fine chemical production batch reactors have a capacity somewhere between 25 and 5000 gallons. These reactors are connected to utilities that supply heat transfer fluids for heating and cooling. These vessels are connected to inerting gases- nitrogen is typical- and to vacuum systems as well.

Maximum reactor pressure can be set as a matter of policy or by the vessel rating. Organizations can, as a matter of policy, set the maximum vessel pressure by the selection of the appropriate rupture disk rating. Vessel pressure rating and emergency venting considerations are a specialist art best left to chemical engineers.

Reactor temperatures are determined by the limits of the vessel materials and by the heat/chiller source. Batch reactors are typically heated or chilled with a heat transfer fluid. On heating, pressurized steam may be applied to the vessel jacket to provide even and controlled heating.  Or a heat transfer fluid like Dowtherm may be used in a heating or chilling circuit.

Process intensification is about getting the maximum space yield (kg product per liter of reaction volume) and involves several parameters in process design. Concentration, temperature, and pressure are three of the handles the process chemist can pull to increase the reaction velocity generally, but concentration is the important variable in high space yield processes.  Increasing reaction temperatures or pressures might increase the number of batches per week, but if more product per batch is desired and reactor choices are limited, then eventually the matter of higher concentration must be addressed.
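
As a crude illustration of what space yield means for campaign planning, the sketch below sizes a hypothetical campaign at two different space yields. The reactor size, fill fraction, and tonnage are made-up numbers, not a real process.

```python
# What space yield buys: hypothetical campaign sizing at two concentrations.
# Space yield = kg of isolated product per liter of reaction volume.
import math

reactor_volume_L = 2000 * 3.785      # a 2000 gallon reactor, ~7,570 L (hypothetical)
max_fill = 0.80                      # usable fraction of the vessel   (assumed)
campaign_kg = 20_000                 # product required                (hypothetical)

for space_yield in (0.05, 0.15):     # kg product per L of reaction mass (hypothetical)
    kg_per_batch = reactor_volume_L * max_fill * space_yield
    batches = math.ceil(campaign_kg / kg_per_batch)
    print(f"Space yield {space_yield:.2f} kg/L -> {kg_per_batch:.0f} kg/batch, "
          f"{batches} batches for the campaign")
```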

The principle of the economy of scale says that on scale-up of a process, not all costs scale proportionally or at the same rate. That is, if you double the scale, you double the raw material costs but not necessarily the labor costs. While there may be some beneficial economy of scale in the raw materials, most of the economy will be had in the labor component of the process cost. The labor and overhead costs of operating a full reactor are only slightly greater than those of a quarter-full reactor. So, the labor component is diluted over a greater number of kg of product in a full reactor.

The same effect operates in higher space yield processes. The labor cost dilution effect can be considerable. This is especially important for the profitable production of commoditized products where there are many competitors and the customer makes the decision solely on price and delivery. Low margin products where raw material costs are large and relatively fixed and labor is the only cost that can be shaved are good candidates for larger scale and higher space yield.
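
A quick worked example of the dilution effect, with made-up costs:

```python
# Labor-cost dilution, illustrated with hypothetical numbers.
# Raw material cost scales with output; the labor/overhead of running the vessel
# is roughly the same per batch whether it is a quarter full or full.

raw_material_per_kg = 8.00         # $/kg, scales with output      (hypothetical)
labor_overhead_per_batch = 12_000  # $/batch, roughly fixed        (hypothetical)

for fill, kg_out in [("quarter-full", 250), ("full", 1000)]:
    cost_per_kg = raw_material_per_kg + labor_overhead_per_batch / kg_out
    print(f"{fill} reactor: ${cost_per_kg:.2f}/kg "
          f"(labor share ${labor_overhead_per_batch / kg_out:.2f}/kg)")
```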

But the chemist must be wary of certain effects when attempting process intensification. In general, process intensification involves increasing some kind of energy in the vessel. Process intensification through increased concentration will have the effect of increasing the amount of energy evolution per kilogram of reaction mixture.

Energy accumulation in a reactor is one of the most important things to consider when attempting to increase space yield. It is crucial to assure that process changes do not result in the accumulation of hazardous energy.

Energy accumulation in a reactor occurs in several ways. The accumulation of unreacted reagents is a form of stored energy. The danger here is in the potential for a runaway reaction. Accumulated reagents can react to evolve heat, leading to accelerated rates, and may eventually open further exothermic pathways of decomposition. As the event ensues, the temperature rises, overwhelming the cooling capacity of the reactor. The reactor pressure rises, accelerating the event further. At some point the rupture disk bursts, venting some of the reactor contents. Hopefully the pressure venting will result in cooling and depressurization of the vessel contents. But it may not. If the pressure acceleration is greater than the deceleration afforded by the vent system, then the reactor pressure will continue to a pressure spike. This is where the weak components may fail. Hopefully, nobody is standing nearby. Survivors will report a bang followed by a rushing sound followed by a bigger bang and a BLEVE-type flare if the system suffers a structural failure.
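
The arithmetic behind “overwhelming the cooling capacity” can be sketched with a toy heat balance: heat generation climbs exponentially with temperature (Arrhenius) while jacket cooling climbs only linearly. Every parameter below is a hypothetical placeholder, not data from any real reactor.

```python
# Toy Semenov-style heat balance (all parameters hypothetical).
import math

m_Cp = 2000.0 * 1800.0          # J/K: 2000 kg charge x ~1.8 kJ/(kg K)
UA = 800.0                      # W/K, jacket cooling capacity
T_cool = 20.0                   # deg C, coolant temperature
q_ref, T_ref = 40_000.0, 60.0   # W of reaction heat at the reference temperature
Ea_over_R = 9000.0              # K, apparent activation temperature

def q_gen(T):
    """Arrhenius-type scaling of the heat generation rate with temperature."""
    return q_ref * math.exp(Ea_over_R * (1 / (T_ref + 273.15) - 1 / (T + 273.15)))

T, dt = 60.0, 1.0               # start at 60 deg C, 1 s time steps
for step in range(4 * 3600):
    dTdt = (q_gen(T) - UA * (T - T_cool)) / m_Cp
    T += dTdt * dt
    if T > 160.0:
        print(f"Cooling overwhelmed: {T:.0f} deg C reached after {step / 60:.0f} min")
        break
else:
    print(f"Cooling keeps up; temperature settles near {T:.0f} deg C")
```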

Energy accumulation can manifest in less obvious ways. Here is an example. Assume a spherical reaction volume. As the radius of the sphere increases, the surface area of the sphere increases as the square of the radius. The volume increases as the cube of the radius. So, on scale-up the volume of reaction mixture (and heat generation potential) will increase faster than the heat transfer surface area. The ratios are different for cylindrical volumes, but the principle is the same. Generally the adjustment of feed rates will take care of this matter in semi-batch reactions. Batch reactions where all of the reagents are added at once are where the unwary and unlucky can get into big trouble.
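
Here is the spherical-case arithmetic, just to put numbers on it; the trend, not any particular vessel, is the point.

```python
# Surface-to-volume ratio of a spherical reaction mass as it is scaled up.
# Heat generation scales with volume (r^3); heat removal scales with area (r^2).
import math

for liters in (1, 100, 10_000):
    volume_m3 = liters / 1000.0
    r = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)   # radius, m
    area = 4 * math.pi * r ** 2                      # m^2
    print(f"{liters:>6} L: area/volume = {area / volume_m3:6.1f} m^2 per m^3")
```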

Process intensification via increased concentration may have deleterious effects on viscosity and mixing. This is especially true if slurries are produced and is even worse if a low boiling solvent is used. Slurries result in poor mixing and poor heat transfer. Low boiling solvents may be prone to cavitation with strong agitation, exacerbating the heat transfer problem. Slurry solids provide nucleation sites for the initiation of cavitation.  Cavitation is difficult to detect as well. The instinct to increase agitator speed to “help” the mixing may only make matters worse by increasing the shear and thus the onset of cavitation.

Denser slurries resulting from process intensification are more problematic to transfer and filter as well. Ground gained from higher concentrations may be lost in subsequent materials handling problems. Filtration is where the whole thing can hang up. It is important for the process development chemist to pay attention to materials handling issues before committing to increased slurry densities. Crow is best eaten while it is still warm.

Disclaimer: Combichem or HTE is definitely not my area of expertise. It is, therefore, inevitable that I’ll say something blindingly ignorant about it. Despite my admitted ignorance, it appears to me that there is something happening, some kind of phase shift, in the small molecule discovery marketplace that is of general interest to the chemical R&D community. In fact, it may just be part of an overall change in how we do chemistry in general.

I’ve been hearing no small amount of buzz from chemists in the job market about the flattening or even downturn of US pharma R&D in general and of combichem or High Throughput Experimentation (HTE) in particular.  It is not that HTE is in any particular danger of extinction, but rather certain companies who offer the equipment platforms and tech packages seem to be evolving away from supplying equipment as a core business activity. Many of the big customers who could afford the initial cash outlay for HTE technology are doing their work in-house, dampening the demand for discovery services by HTE players at their aggressive prices.

One company I know has evidently shifted emphasis into the drug discovery field rather than try to continue marketing HTE equipment.  Near as I can tell, they are betting that having their own drug candidates in the pipeline is a better strategy than being strictly a technology or R&D services supplier. Time will tell the tale.

What the honchos in the board rooms of America’s big corporations forget is that the art they export so profitably was in all likelihood developed by people educated in US taxpayer subsidized institutions with US government grants. American citizens subsidize the university research complex in this country and by extension, supply a brain subsidy to industry. To export chemical R&D is to subsidize the establishment of a similar R&D capacity in other nations.  I think if you poll most US citizens, they’ll say that this is not the outcome they expected.

Software for HTE has become a derivative product that, for at least one HTE player, is proving to be rather successful. It isn’t enough to have the wet chemical equipment to make hundreds and thousands of compounds. You must be able to deal with the data storm that follows.

The business of HTE technology is evolving to a mature stage as the market comes to understand how to make and lose money with it.  There is always a tension between “technology push” and “market pull”.  It is often easier to respond to concrete demand with existing tools than to get new adopters to invest in leading edge tools to discover risky drug or catalyst candidates.

The extent to which the US chemical industry (all areas, including pharma and specialties) is outsourcing its R&D or simply moving it offshore is distressing. R&D is our magic. And promoting its execution offshore is to accelerate the de-industrialization of the USA.  It is folly to train the workers of authoritarian nations like China to execute your high art. American companies must learn to perform R&D in an economically accessible way and keep the art in-house. 

What makes R&D so expensive in the USA? Well, labor for one thing. In the end, our dependence on expensive PhDs to do synthesis lab work may be a big part of our undoing. But there is much more to it than that. Look at the kinds of facilities that are built for chemical R&D. In the US and EU they are usually very expensive to build and maintain. Regulations and litigation avoidance are pushing industry toward ever more complex and high-overhead facilities in which to handle chemicals and conduct research.

Then there is the cost of every widget and substance associated with chemistry. Look at the pricing in the Aldrich catalog or get a quote from Agilent. Have a look at the actual invoice from your latest Aldrich order and look at the shipping cost. High isn’t it? We’ve accelerated our demand for ready-made raw materials and hyphenated instrumentation. To what extent are we gladly buying excess capacity? Who doesn’t have an instrument with functions and capabilities that have never been understood or used?

It is possible to conduct R&D under lean conditions. But it can’t be done cheaply in existing industrial R&D campuses. Cost effective R&D will require a recalibration for most chemists in terms of the kinds of working conditions and administrative services they expect. But business leaders will have to recalibrate as well. Prestige can be manifested in product quality and a sense of adventure and conviviality rather than in an edifice. There are companies all over the world doing this every day. They set up shop in a commercial condo or old industrial building with used office furniture and grubby floors. What matters in chemistry is what is happening (safely) in the reactor. Everything else is secondary.
