Beliefs and actions

What’s new?

Researchers at NIST have developed synthetic COVID-19 material that can be made with specific concentrations of the virus’s genetic material and that can be handled safely. This material can be used to determine the sensitivity of tests for COVID-19: how well does a test detect the presence of COVID-19 at different concentrations?

What does it mean?

For many years I taught university courses on probability and decision analysis. The mathematics behind COVID-19 testing is, of course, Bayes’s theorem, and that piece of mathematics is an insightful way of looking at medical diagnosis and public health decision making. Here is an example.

If a physician thinks a patient has a particular disease and gives a test to the patient, three numbers are necessary to determine the meaning of the test outcome for that patient. Let’s assume that, based on observations of the patient’s symptoms and before having the test administered, the physician thinks there is a 75% chance the patient has the disease; that is the first number. The test itself is described by two other numbers. The sensitivity of a test is the probability the test gives a positive result (that is, says the patient does have the disease) when the patient actually has the disease; we want that number to be high and let’s assume our test has a sensitivity of 95%. Specificity is the probability the test gives a negative result when the patient actually does not have the disease; we want that number to be high and let’s assume our test has a specificity of 99%.

Think of sensitivity and specificity by considering a person you suspect to have green/red color blindness. How well that person detects the colors is described by the person’s ability to say “green” when presented with an object that is actually green and the person’s ability to say “red” when presented with an object that is actually red. How well does the person detect the real color? How well does the test detect the true health status of the patient? Note that a person who always says “green” (whether shown a green or red object) and a medical test that always says “disease” (whether the disease is actually present or not) have perfect sensitivity (100%) but perfectly awful specificity (0%). A good test has high numbers for both sensitivity and specificity.

In my example, I write three statements shown in the three bullets below, using the notation “disease” (in quote marks) to indicate that the test says the patient has the disease. Note that the event “disease” (the test says the patient has the disease) is not the same as the event disease (the patient actually has the disease).

  • P(disease) = 0.75, which implies that P(no disease) = 0.25
  • Sensitivity: P(“disease”|disease) = 0.95, which implies that P(“no disease”|disease) = 0.05. The first notation is read as the probability the test says the patient has the disease (“disease”), given the patient does actually have the disease (disease).
  • Specificity: P(“no disease”|no disease) = 0.99, which implies that P(“disease”|no disease) = 0.01

The calculation using Bayes’s theorem is:

P(disease|“disease”) = P(“disease”|disease) P(disease) / [P(“disease”|disease) P(disease) + P(“disease”|no disease) P(no disease)] = (0.95)(0.75) / [(0.95)(0.75) + (0.01)(0.25)] = 0.7125 / 0.7150 ≈ 0.997

Thus, if the test says the patient has the disease, the physician is now almost sure (99.7%) that the patient has the disease. A similar calculation shows that if the test says the patient does not have the disease, the probability the patient does not have the disease is 0.2475 / 0.2850 ≈ 86.8%; the physician is fairly sure, but not almost sure, that the patient is disease free.
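
For readers who want to check the arithmetic, here is a minimal Python sketch of the calculation, using the numbers assumed above:

```python
# Bayes's theorem with the numbers assumed above.
prior = 0.75        # P(disease) before the test
sensitivity = 0.95  # P("disease" | disease)
specificity = 0.99  # P("no disease" | no disease)

# Total probability the test says "disease".
p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
post_disease_given_pos = sensitivity * prior / p_pos

# Total probability the test says "no disease".
p_neg = (1 - sensitivity) * prior + specificity * (1 - prior)
post_no_disease_given_neg = specificity * (1 - prior) / p_neg

print(f'P(disease | "disease")       = {post_disease_given_pos:.3f}')    # ~0.997
print(f'P(no disease | "no disease") = {post_no_disease_given_neg:.3f}')  # ~0.868
```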

If 10,000 patients like this patient take the test, the results, on average, are:

              “Disease”   “No disease”   Totals
Disease           7125            375      7500
No disease          25           2475      2500
Totals            7150           2850     10000

Of the 10,000, 7125 will be correctly identified as having the disease, while 25 will receive a false positive; that is, they will be told they have the disease when they actually don’t. Another 2475 will be correctly told they do not have the disease, but 375 will receive a false negative; that is, they will be told they don’t have the disease when they actually do have it.

I love math. I want to show you many more examples (with graphs!) because Bayes’s theorem is an amazing piece of math. But more than the math itself, I love the usefulness of math, so I will move on to the useful implications.

I have used Bayes’s theorem to model the thinking of immunohematologists (how do the results of blood tests revise beliefs concerning the antibodies present in a patient’s blood), geologists (how do test results revise beliefs concerning the likelihood of seismic activity in an area proposed for storage of nuclear waste), and insurance agent managers (how does the behavior of insurance agents revise beliefs concerning their loyalty to the insurance company).

However beautiful and useful the math in revising beliefs, the most important perspective to remember is that beliefs drive actions. What blood should the immunohematologist recommend for transfusion into the patient, where should nuclear waste be stored, and what action should a company take concerning an agent’s disloyalty?

With COVID-19 tests, action is complicated. Because of the lag in getting test results in many parts of the country and because of poor or unknown specificity and sensitivity of COVID-19 test results, a person cannot use the results to guide immediate action, nor can contact tracers use the results to know whose contacts to trace, nor can policy makers use the results to decide what public health measures should be taken.

What does it mean for you?

Bayes’s theorem is a way to think about revising one’s beliefs. One starts with prior beliefs, that is, initial beliefs about the presence of some factor (the disease or the market attractiveness of a new product) based on current information. Then, in the face of new information (a medical test or a test of the new product in one market), one needs to revise one’s beliefs.

That revision should be based on the ability of that test to detect the factor: if the factor really is present, can the test detect it, and if the factor really is not present, can the test detect that? Is the test market a good indication of whether the product will be successful in other markets? I used to live in Columbus, Ohio, which, I was told, was an excellent test market for fast food items: if a product was a success there, it would succeed nationally, and if it failed there, it would fail nationally. The test is evaluated by its ability to detect reality. We want a test with 100% sensitivity and 100% specificity but, unfortunately, with some diseases (for example, Alzheimer’s disease), the only perfect test is an autopsy.

Whenever there is uncertainty (that is, all the time), you won’t always be right. Any test has false positives and false negatives. It follows that the quality of a decision can be judged only by whether it was a good decision at the time it was made, not by whether it happened to have good outcomes.

Beliefs drive actions, but a decision about actions must also consider the possible consequences of actions. With a different set of three numbers, the Bayes’s theorem calculation may leave a physician unsure whether or not the patient has a disease, but the physician may choose to administer a treatment anyway if early treatment is known to have an excellent outcome while lack of treatment often results in death.

One action leads to a series of actions. In decision analysis, we use a decision tree to represent a series of chance events and decision points into the future. A physician may try a treatment and then, depending on the patient’s results, continue that treatment or try a different one. Taking one action now may leave available later actions as options; a different action now may close off other later actions.

I taught an entire course (actually two entire courses) on decision analysis, but this piece is already longer than my usual writing. I would love to go on and on, but I will stop here: decision analysis is a very useful framework with many insights to offer decision makers.

Where can you learn more?

Regarding the formation of possessives, I come down on the side of Bayes’s theorem, not Bayes’, but that dispute rages on.

Several calculators for Bayes’s theorem are available: here, here, and here. This one allows for more than two hypotheses (disease and no disease) and more than two test outcomes (“disease” and “no disease”).

Bayesian methods are increasingly used in sophisticated AI methods; see this example in machine learning.

Decision analysis software can help in modeling decisions. TreeAge is my favorite.

There are many books that serve as good introductions to decision analysis: Handbook of Decision Analysis, Introduction to Decision Analysis by David C. Skinner, and Choices by Michael D. Resnik. My favorite is still the 1968 classic by Howard Raiffa.

The American Academy of Family Physicians provides this overview of COVID-19 testing. This article discusses the sensitivity and specificity of COVID-19 tests for COVID-19 antibodies (not for the COVID-19 virus).

Your electric future

Source: Wikimedia

What’s new?

Colorado’s Public Utilities Commission and other Colorado government agencies have opened discussions with the Colorado companies that provide natural gas service about future plans for large natural gas infrastructure, as described by Allen Best in Mountain Town News this week.

What does it mean?

Mr Best explains that gas infrastructure is expensive and long-lasting. The discussions will be based on the long-term effects of large decisions, looking out 10 to 20 years. Limiting new installations of natural gas facilities will help the state meet ambitious goals to reduce emissions of greenhouse gases; a 2019 Colorado law requires the regulatory agencies to act in support of these goals.

He places this action in the context of a movement called beneficial electrification. As explained by the Environmental and Energy Study Institute: “Beneficial electrification (or strategic electrification) is a term for replacing direct fossil fuel use (e.g., propane, heating oil, gasoline) with electricity in a way that reduces overall emissions and energy costs. There are many opportunities across the residential and commercial sectors. This can include switching to an electric vehicle or an electric heating system – as long as the end-user and the environment both benefit.”

The long-term goal of moving completely away from fossil fuels to renewable energy increasingly looks like electrification, because electricity can be generated economically from renewable sources such as solar and wind. While the COVID-19 crisis has slowed the electrification trend somewhat, analyses by the US Energy Information Administration and by the financial asset management firm Lazard both conclude that renewable energy has won the cost battle. “In other words, it is now cheaper to save the climate than to destroy it.”

I am proud to point out the huge contribution of engineers in this achievement. The trend is driven by improvements in the devices and physical systems, improvements in the manufacture of those items, and improvements in the use of those items. For example, wind power has improved with better designs for wind towers, turbines, and all their components; more efficient manufacturing of all of those items leading to lower costs; and better designs for operating electrical grids to maximize the use of the wind when it is available. Furthermore, electrified devices can incorporate sophisticated controls that optimize their operation to provide high user satisfaction and low energy use. My two favorite engineering specialties, industrial engineering and mechatronics, deserve a lot of the credit. Industrial engineering drives the improvement in manufacturing and mechatronics drives the use of controls of devices and systems.

The National Academy of Engineering cites electrification as the greatest engineering achievement of the 20th century, and the book Networks of Power: Electrification in Western Society, 1880-1930, by Thomas P. Hughes, makes the case for the huge societal changes that ensued. David Nye’s Electrifying America: Social Meanings of a New Technology emphasizes even more the social impact of this technology. The current second wave of electrification is likely to have equally large impacts.

Much work remains to be done. As skeptics point out, the sun does not always shine and the wind does not always blow, so storage technology is essential for this electrified future. The cost, range, and usefulness of electric vehicles will depend crucially on battery technology and, thanks again to the engineers, that technology is improving at a rapid pace. The question is not if but when electric vehicles will be cheaper than vehicles with internal combustion engines. Other technologies are on the way to, or have already achieved, economic feasibility: utility-scale energy storage, efficient grid management, and advanced building design. My favorite not-so-crazy idea is the increasing use of direct current, rather than alternating current (yes, I think Edison was right, not Tesla).

Regulatory issues, the topic where I started this article, remain important, especially concerning the financial incentives felt by consumers of electricity.

What does it mean for you?

Consumer uses, including home heating, water heating, and electric vehicles, tend to dominate the discussion of beneficial electrification. Some of those technologies apply in business settings, and increasingly your offices and other buildings will rely on electricity.

Industrial uses present more of a challenge, especially processes that require large surges of power for short time periods. But if the steel mill in my town can move to solar power, other industrial processes can and will do the same. The mill has already transitioned to using only recycled steel and now is leading in the electrification trend. Electric forklifts, induction heating, and efficient heat pumps are already here.

Where can you learn more?

This March 2018 report from Lawrence Berkeley National Lab lays out the potential for electrification, with an emphasis on the industrial and commercial sectors. Saul Griffith is a powerful voice advocating for the electric future, and he is creating companies that move us there. McKinsey & Company argues that companies should be factoring electrification into all of their capital spending plans.

All the pieces

“This Norton cutter-grinder which had been specially adapted to grind cams on the motor shaft for the electric dry shaver which this New England plant normally produces, has now been converted to grind permanent magnet rotors for machine tool motors. The conversion was accomplished with new jigs and fixtures and slight changes in the head. This is a tricky job well suited to the skill of this plant’s workers. The metal is alnico and the octagonal shape consists of surfaces which are arcs drawn from the center of the piece.
Schick Inc., Stamford, Connecticut” 1942
Source: Library of Congress. https://www.loc.gov/pictures/item/2017690845/

One of my favorite online magazines, Modern Machine Shop, published its third article since 1990 on the tool and die making company C&A Tool Engineering Inc.

What’s new?

Actually, there is nothing new. The technology is well established. The management principles are not revolutionary. The worker training is standard.

What does it mean?

A tool and die maker makes the tools, dies, fixtures, jigs, molds, and other physical objects that are needed to produce the physical objects sold to customers. For example, plastic buttons are made either by injecting plastic into a mold in the shape of the desired button or by cutting a sheet of plastic into the desired shape which is then polished. In the first case, the process starts by making a mold, into which the plastic will be injected; in the second case, the process starts by making a die, which is the tool used to cut the plastic sheet. A fixture holds an object in place during the manufacturing process. A jig guides an object while it is moved in the manufacturing process.

The objects made by a tool and die maker are thus custom made for a particular production process. An object may be a variation of a previously made object, but its making requires knowledge and skill often built up by years of apprenticeship and experience. Also, these objects often must be made to meet exacting specifications.

Tool and die work is highly skilled and such makers require respect and independence in order to do their work. A tool and die maker uses computer controlled machines, but the work cannot be automated because each item is custom made. A tool and die shop is a highly sophisticated job shop, but it is still a job shop, not a mass manufacturing process.

Unusually, C&A is a tool and die company that also does production work; that is, it also makes products for customers, not just the tools that make the products. Usually companies focus either on tool and die work or on production work.

Over years of growth, C&A has designed its plants and processes well, carefully considering the layout of the buildings, for example, to promote flow of product (as much as possible in a job shop), to promote the useful interaction of people, and to support future expansion. Machines are selected carefully and upgraded to keep up with developments in the industry; the company has, for example, moved into additive manufacturing. C&A has also watched its business side carefully, and has moved from producing automotive parts and surgical instruments into producing medical devices and aerospace parts. C&A training is done with the local vocational training school and on-the-job apprenticeship; cross-training is emphasized.

But what really strikes me about this company is its ability to combine principles of efficiency, profit, and decency, as seen in this excerpt from the MMS article, explaining why C&A mixes tool and die work with production work.

“The mix was good for the people in the shop, and what is good for the people in the shop is good for business. Tool and die work builds skills; production jobs build proficiency. Tool and die work can be challenging and a change of pace; production jobs provide continuity and just enough routine.”

One of the earlier MMS articles has this quote from Dick Conrow, the company founder: “If people know their jobs and have the right tools for the job, then all I have to do is get out of the way. I don’t have to run the place.”

And a 2012 article in Northeast Manufacturing News outlined the C&A philosophy on upgrading equipment: “C&A’s general philosophy is to purchase equipment before there is necessarily a specific need for it,” enabling the company to learn and be prepared for jobs it otherwise could not take on.

What does it mean for you?

This company doesn’t do anything that any other company can’t do, but C&A puts all the pieces together and has done so successfully for 51 years. In the three MMS articles, author Mark Albert (Editor Emeritus, Modern Machine Shop) attributes C&A’s success to its adherence to principles, such as treating workers with respect rather than trying to control them, organizing the work to support teams,  and using production work to continue to learn about tool and die work.

The founder, Mr Conrow, retired in 2018, but the new manager is continuing to use those principles as the company moves further into “digitalization, globalization and integration.” The leadership vision for C&A involves knowing how to establish and hold onto important principles, while adapting those principles to new realities. Whatever the mission of your organization, the important role of leadership is to identify and follow guiding principles, while knowing when and how to adapt and change.

Some caveats are necessary. I yield to no one in skepticism, so it may be that the workers at C&A would have a different tale to tell about the company’s workplace environment, but I also yield to no one in my optimism that such work places can and do exist. C&A was sold in 2018 to a Japanese company, MinebeaMitsumi Group. Will C&A be able to hold to the principles established by Mr. Conrow?

Where can you learn more?

The three MMS articles about C&A are here:

The company’s web page also has more information.

It’s only a model

Source: author

What’s new?

My colleague Bill Thomas of EJB Partners called my attention to an article published by McKinsey & Company on June 25 titled “Demystifying modeling: How quantitative models can – and can’t – explain the world.” Highlighting the role of modeling during the COVID-19 crisis, the four authors describe the powers of models and the pitfalls to avoid when using models. The article is really excellent and I recommend you go read it before finishing my piece.

What does it mean?

Models come in many forms: mental models, physical models, mathematical models, simulation models, and more. A model is a representation of reality that can be analyzed to derive conclusions that improve one’s understanding of reality. I hesitate to make this sweeping statement, but the ability to make and use models, as a type of tool, seems to me to be a definition of human thinking abilities. For 40 years, I taught engineering students the various models that engineers find useful in designing objects and systems to make the world better – from the equation F=ma to the M/M/1 queuing model. One of the most important phrases I taught them was: “It’s only a model.”

The McKinsey & Company article is short (go back and read it if you haven’t yet). Given the article’s focus on COVID-19 models, you also might want to look at this tool that my state of Colorado has made available for citizens to explore the effect of different behaviors on the spread of the disease.

While short, the McKinsey & Company article covers all the important points about models. Models are useful in clarifying which drivers matter, determining how much an input can matter, and facilitating discussions about the future. A model can’t fix bad data, assumptions and simplifications must be examined, and users should not expect too much certainty.

“The purpose of modeling is insight, not numbers” is a quote attributed to many people; I first heard it attributed to cybernetics expert Ross Ashby. Experiments can be performed on a model more easily, more cheaply, and more quickly than on the real world, where you may only get one chance to see, for example, how the COVID-19 pandemic evolves. Insight means an understanding of how the various parameters interact to create the outcome, but insight is not a forecast. As baseball expert Yogi Berra said, “It’s tough to make predictions, especially about the future.” The McKinsey & Company authors, in a sidebar, describe how they use scenarios, which are not meant to be forecasts, but are meant to support discussion of the implications.

What does it mean for you?

I would fault the McKinsey & Company article only for omitting the politics and power involved in creating a model, and the fact that people need to trust the modelers, not just the model, if they are not to reject the model’s findings that disagree with their existing beliefs. You need to ask questions about any model: who made it and what assumptions went into it, but also what are the explicit and implicit goals of those who made it?

The Colorado tool is open source, with the code posted on GitHub, a repository for software, especially open-source software. The documentation tab provides a link to more information, including the names and qualifications of the people who created the model. Other links point to a more detailed description of the model, including assumptions such as: the incubation period is 4.2 days, with 1 day before that of presymptomatic infectiousness; 1800 ICU beds are available in Colorado; recovered individuals are assumed to remain immune to infection; and no cases of COVID-19 are imported or migrate from outside Colorado. Changing the parameters of this model quickly convinced me that modest improvements in social distancing (three parameters) and in the proportion of the population wearing masks (one parameter) could crush the epidemic in Colorado by September. I was very surprised by how small the necessary changes are. “Wear a damn mask,” Governor Polis said recently. The model fits my existing beliefs, but others may not be persuaded. I am not here to debate, only to note that the people who make models are often seeking to make a point, whether in the public arena or in a private company. The creators of this Colorado model certainly want people to practice social distancing and to wear face masks.

One goal in creating a model is always robustness, that is, a model whose conclusions do not vary much if its key assumptions are tweaked. For example, what happens to the results if the incubation period is 3.2 or 5.2 days instead of the assumed 4.2 days? (The Colorado model assumes a single number for that parameter, the same for everyone; more sophisticated models would describe that parameter with a probability distribution.) A model has to be stressed by using sensitivity analysis: how much in-migration of COVID cases would be needed to change the conclusion I reached above, that only small changes in social behavior are needed to make us safe? Unfortunately, I see those cars with Texas license plates, although I also saw them a lot before COVID-19.
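
To make sensitivity analysis concrete, here is a minimal sketch, emphatically not the Colorado model: a bare-bones SEIR-style model whose transmission and recovery rates are invented for illustration, run with three different incubation periods to see how much the peak moves:

```python
# A bare-bones SEIR-style epidemic model with daily time steps.
# The rates below are invented for illustration; this is not the
# Colorado model, only a sketch of what sensitivity analysis looks like.

def peak_infection(incubation_days, beta=0.25, gamma=1/7, days=365):
    sigma = 1.0 / incubation_days        # rate of leaving the exposed state
    s, e, i, r = 0.999, 0.0, 0.001, 0.0  # fractions of the population
    peak = i
    for _ in range(days):
        new_exposed = beta * s * i
        new_infectious = sigma * e
        new_recovered = gamma * i
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        peak = max(peak, i)
    return peak

# How much does the conclusion move if the incubation period is wrong?
for inc in (3.2, 4.2, 5.2):
    print(f"incubation {inc} days -> peak infectious fraction {peak_infection(inc):.3f}")
```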

The key to using a model is to argue with the modelers and make them revise the model again and again. “What if …” should be the starting words in almost every question you ask. This model is deliberately simple, meant to be used by the public, but more sophisticated models are available.

A model is simply an extended argument, a case made to support conclusions. The argument may be in equations or a computer program, but it is an argument and can be examined and questioned just as one can do with any argument. The strength of a model is that it is explicit, even if sometimes complicated. A model, like Colorado’s COVID-19 model, can also be used to generate scenarios, which in turn can be used for planning. Colorado’s health care system can plan for a worst case scenario.

The problem with any model is that it is only a representation of reality, not reality itself, and thus it inevitably fails to capture some aspects of reality. I taught my students to say “It’s only a model,” said with a shrug.

Where can you learn more?

The forecasts of some more sophisticated COVID-19 models are summarized here by the website FiveThirtyEight and here by the CDC. The McKinsey & Company article has a sidebar with four suggested articles to learn more about COVID-19 modeling.

An editorial at this link discusses the differences between, and potential uses of, two COVID-19 models. The article makes wise recommendations about the use of models. In particular, the point of analysis is to support decision making about action, and models can recommend different but mutually reinforcing actions to take.

New shoes

Source: History of the Incorporation of Cordiners in Glasgow by William Campbell, 1883

What’s new?

In a 1 July 2020 article in the magazine Additive Manufacturing, Senior Editor Stephanie Hendrixson describes how Flowbuilt Manufacturing makes customized shoes for various brands, using biometric scanning, pressure plates, 3D printing, and injection molding.  

What does it mean?

A technological idea may be around for a while, waiting for the actual technology to catch up, making the idea real. The fax machine was invented in 1843 but had to wait for advances in scanning and transmission speed to become useful in the 1980s (and to be superseded by the Internet in just a few decades). My father (a systems engineer at Bell Labs) was part of a team that developed the first useful multiplexing system for sending multiple conversations on one channel (TASI, used in the first transatlantic telephone cable); it was based on ideas that had been around for decades, but could only be implemented at the speed of conversation when transistors were invented.

Similarly, the idea of mass customization has been around for a long time. Dell was, I think, the first personal computer manufacturer (in 1984) to offer the consumer the option of customizing the components of the computer, and consumers now routinely expect to be able to do that (although Dell has moved away from allowing customization on some models). Mass production keeps costs low by manufacturing identical products, while customization of a product for the particular needs of a customer is usually done at a higher cost. The phrase “mass customization,” meaning customization done at mass-production cost, therefore once seemed an oxymoron, not a real possibility.

Shoe manufacturing was originally a customized process (my fourth great grandfather John Gentle was a shoemaker in Glasgow, Scotland, and deacon of the Incorporation of Cordiners in 1808-1809). The industrial revolution changed shoe making into a mass production process. However, shoe manufacturing still involves hand work and thus most shoes sold in the US are made overseas to take advantage of lower labor costs.

Mass customization has now become more widespread. Eyewear, clothing, face masks, hearing aids, motorcycle helmets, baby ultrasounds, wedding cake toppers, and horse saddles are now made with some combination of scanning and customization. The trend is likely to continue as the component technologies and the associated processes improve and become increasingly integrated.

For mass customization to become reality, the technology and the processes for the supply chain, manufacturing, and sales all had to evolve to the point where mass customization is profitable for companies and attractive for consumers. The technology includes sensors (scanners and pressure sensors in the case of shoes), software to process and integrate the resulting data into the individualized product design, and 3D printing of molds or of the product itself.

Those technologies have to be integrated with business practices. Mass customization usually means supply chains are shortened, so manufacturing may be reshored. Chuck Sanson, Flowbuilt Manufacturing’s director of business development, is quoted in the Additive Manufacturing article as saying “Personalized products are not something that you can effectively manage from 6,000 miles away.” The market must be analyzed for the potential for customization; at the same time, the product must be analyzed into modules that will or will not be customized (for example, will the box for shipping the item be standard or customized?). New materials may be appropriate. Inventory management must be rethought, including relationships with suppliers, management of the inventory of customizable and noncustomizable components, and handling of consumer returns. Production must be rethought as a pull system for the customized parts, which usually means a tightly integrated information system for tracking parts and products. The relationship with the final customer may be rethought to involve selling through intermediaries or direct to the customer.

What does it mean for you?

The focus has to be on what customization can add for your customers. What are the features of your product that your customers will want to design or have designed for them? At the same time, you need to consider which features can be manufactured at low cost.

The path to mass customization can be a slow transition or a reimagining of the entire process at once. Flowbuilt took the second path: it was established in 2018 by its parent company Superfeet, a specialty shoe and insole manufacturer, with the task “to come up with a better way of making shoes within the United States,” but it evolved from a Superfeet project with HP that created custom-made 3D-printed insoles. The customization drove the need to reshore the manufacturing to shorten the supply chain. Flowbuilt also thought carefully about which parts of the manufacturing process could be automated while still remaining flexible enough to allow small-run production for diverse customers.

While I have described mass customization for consumer products such as shoes, similar changes have affected the manufacturing of other products, such as custom gears.

Where can you learn more?

An Internet search for the phrase mass customization will turn up many resources. An influential 1997 article from Harvard Business Review describes four ways to think about mass customization. One of the authors of that article, B. Joseph Pine II, wrote the 1993 book Mass Customization and cowrote (with James H. Gilmore) the 2000 book Markets of One, although Mr. Pine now focuses on what he calls the experience economy. A large amount of research on mass customization has been compiled in the two-volume Handbook of Research in Mass Customization and Personalization.

The customer side of mass customization is related to the concept of long tail marketing, in which a company seeks to supply many diverse customer markets. The manufacturing side of mass customization is related to Industry 4.0, or smart manufacturing.

Operations research

What’s new?

The 12 June issue of Defence Connect reports that the former chief of the Royal Australian Navy, Tim Barrett, sees the regeneration of the Australian fleet over the next decade as “an ideal opportunity for Australia to make significant changes to structure and strategy – not just in terms of the fleet itself, that is, but how deployments are analysed.” To that end, he calls for a “thinking navy”, arguing that OR (operations research) is a crucial piece of this puzzle.

What does it mean?

Operations research is, of course, research on operations. INFORMS (the Institute for Operations Research and the Management Sciences) states “Operations research (O.R.) is defined as the scientific process of transforming data into insights to making better decisions.” INFORMS pairs OR with Analytics, adding, “Analytics is the application of scientific & mathematical methods to the study & analysis of problems involving complex systems.”

Operations research began with military applications.  The above picture shows my copy of the first textbook on OR, Methods of Operations Research, by Philip M. Morse (my academic grandfather, that is, he was the PhD advisor of my PhD advisor) and George E. Kimball (1950, MIT Press and John Wiley & Sons). The two authors were members of the Operations Research Group of the U.S. Navy and the first version of the book was published as a classified document just after World War II.

The first sentence of the book defines OR: “Operations research is a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control.” Morse was a professor of physics and Kimball of chemistry, so they were familiar with the scientific method; they and others contributed to the war effort by applying the scientific method to improve operations. Presciently, they remarked “experience since the war has shown [that] the techniques and approach of operations research can be of help in arriving at executive decisions concerning operations in any field, industrial and governmental as well as military.” Indeed, many parts of engineering started with military engineering and only later became civil engineering, as seen in the legacy of that term.

This book is a nice introduction to OR. On page 3, the authors give their first simple example, still often cited by OR researchers and practitioners:

The first example, simple to the point of triviality, involves the line-up of soldiers washing their mess kits after eating at a field mess station. An operations research worker during his first day of assignment to a new field command noticed that there was considerable delay caused by the soldiers having to wait in line to wash and rinse their mess kits after eating. There were four tubs, two for washing and two for rinsing. The operations research worker noticed that on the average it took three times as long for the soldier to wash his kit as it did for him to rinse it. He suggested that, instead of there being two tubs for washing and two for rinsing, there should be three tubs for washing and one for rinsing. This change was made, and the line of waiting soldiers did not merely diminish in size; on most days no waiting line ever formed.

They point out several features of this story. “[T]he solution, when seen, was absurdly simple …” The improvement required no additional equipment. The solution was conveyed to someone who could make the needed change – and did. Finally, the waiting was reduced to almost zero even though washing capacity increased by only 50 percent; waiting lines have the property that “the longer they get, the longer they tend to get,” a “self-aggravating property” present in many systems. This story and the analysis still make me smile, almost 50 years after I first read them.
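
The mess-kit line is easy to recreate as a small discrete-event simulation. Here is a minimal sketch; the arrival rate and the exponential service times are my invented assumptions (the book gives only the three-to-one wash-to-rinse ratio):

```python
# A minimal simulation of the mess-kit line: soldiers pass through a
# washing stage and then a rinsing stage, each with several tubs.
# Arrival and service rates are invented for illustration; Morse and
# Kimball report only that washing took three times as long as rinsing.
import heapq
import random

def stage(arrival_times, n_tubs, mean_service):
    """FIFO multi-server stage; returns departure times and waits."""
    free_at = [0.0] * n_tubs  # times at which each tub becomes free
    heapq.heapify(free_at)
    departures, waits = [], []
    for arrive in sorted(arrival_times):
        tub_free = heapq.heappop(free_at)
        start = max(arrive, tub_free)
        finish = start + random.expovariate(1.0 / mean_service)
        heapq.heappush(free_at, finish)
        waits.append(start - arrive)
        departures.append(finish)
    return departures, waits

def mess_line(n_wash, n_rinse, n_soldiers=2000, seed=1):
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_soldiers):           # soldiers arrive at random,
        t += random.expovariate(1 / 1.7)  # on average 1.7 minutes apart
        arrivals.append(t)
    washed, wash_waits = stage(arrivals, n_wash, mean_service=3.0)
    _, rinse_waits = stage(washed, n_rinse, mean_service=1.0)
    return sum(wash_waits + rinse_waits) / n_soldiers

print("2 wash, 2 rinse: average wait", round(mess_line(2, 2), 1), "minutes")
print("3 wash, 1 rinse: average wait", round(mess_line(3, 1), 1), "minutes")
```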

They describe other examples of OR that require more analysis and more technical background, including optimizing the depth setting of antisubmarine depth charges to improve the sinking of U-boats and setting the size of convoys to reduce average ship losses. They discuss the problem of finding the problem, sensitivity analysis, and more.

Since that book, OR has expanded greatly. INFORMS lists, among others, the following techniques and subfields: algorithms, databases, decision analysis, dynamic programming / optimal control, facilities planning, forecasting, game theory, inventory management / production planning, optimization / mathematical programming, probability and stochastic models, quality and reliability, queueing models, scheduling, search and surveillance, simulation, systems thinking, time series methods, and utility and value theory. These techniques have wide application.

Linear programming, a technique for optimization, involves choosing values for specified decision quantities to maximize or minimize a function of those quantities; the chosen values must also satisfy certain constraints (equations or inequalities). All functions in the mathematical formulation are linear in the decision variables.

For example, in 1945 George Stigler described the diet problem, in which the amounts of foods in a diet are chosen to minimize the cost of the diet while meeting the minimum daily requirements for different nutrients. When I was a faculty member at Ohio State in the 1980s, I worked with some agricultural engineering faculty; Ohio State produced a program to help dairy farmers optimize feed for cattle using the linear programming formulation of the diet problem. In 2012, Ohio State professor Dr Luis Morales published a paper updating the diet problem for cattle to include consideration of methane emissions from cattle.
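
To make the formulation concrete, here is a minimal sketch of a diet problem solved with SciPy’s linprog; the feeds, costs, and nutrient numbers are invented for illustration and are not from the Ohio State program:

```python
# The diet problem as a linear program, solved with SciPy.
# The feeds, costs, and nutrient contents below are invented numbers,
# purely to illustrate the formulation.
import numpy as np
from scipy.optimize import linprog

costs = np.array([0.13, 0.08, 0.10])  # $ per kg of corn, hay, soymeal

# Rows: energy (MJ/kg) and protein (g/kg) supplied by each feed.
nutrients = np.array([
    [8.0, 6.0, 9.0],      # energy
    [90.0, 50.0, 400.0],  # protein
])
requirements = np.array([150.0, 2000.0])  # daily minimums

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so we negate
# the nutrient constraints to express "at least" requirements.
result = linprog(
    c=costs,
    A_ub=-nutrients,
    b_ub=-requirements,
    bounds=[(0, None)] * 3,
)
print("kg of each feed:", np.round(result.x, 2))
print("daily cost: $", round(result.fun, 2))
```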

OR problems are often formulated in mathematics and the solution methods are often sophisticated. George Dantzig, later a professor of operations research at Stanford, formulated the military planning problems he worked on during the war as a linear programming model. He eventually developed a method for the solution of such problems, called the simplex method.

The linear programming model and the simplex method are just one example of an OR problem and an associated solution method with wide applicability. Successful applications of linear programming models (and their generalizations) include assigning jobs to machines, scheduling crews for airlines, routing products from production facilities to warehouses to retail locations, selecting the highest-value way to cut a log into lumber, scheduling jobs through a production process, minimizing the distance travelled to deliver meals in a Meals-on-Wheels program, and so forth. The use of linear programming is often taught in business programs, especially MBA programs.

Other methods of OR incorporate uncertainty by modeling with the mathematics of probability. All OR methods require data about the real world system. The Defence Connect article I started with quotes the Head of US Naval Air Forces, Vice Admiral DeWolfe Miller, as saying “I love data,” one of my favorite sayings.

What does it mean for you?

This week I attended the online national conference of the American Society for Engineering Education (ASEE), the international organization where engineering faculty present research, discuss, and learn how to improve engineering education. A nice paper presented there (“Creating a Community of Practice for Operations Research by Co-creating a High Impact Executive Education Program in India,” by Venugopalan Kovaichelvan and Patrick A. Brunese) described a program at a company in India, in which senior managers from the Indian company worked with faculty from a US university to create three modules delivered to students over a 10-month period using online, on-site, synchronous, and asynchronous modes of delivery, combining learning with immediate application to supply chain problems. The topics focused on sensing and framing problems, developing a model for study, selecting appropriate modeling methods and data, applying the methods, interpreting results, implementing and validating the solution, and developing a comprehensive framework for decision support. Students worked on projects they identified (for example, reorganizing a distribution network for a particular product) and their work was rigorously assessed. Graduates are supported in a community of practice, and 32 senior managers are now qualified as advanced OR practitioners. Savings from the 14 initial projects provided “one-time monetary benefits equivalent to the investment for the entire development and delivery of the advanced OR program.”

The first delivery of the program was done by US faculty and the second iteration is being delivered by participants from the first program. A social learning program is supporting the 60 members of the community of practice.

I started this article with a quote from Tim Barrett, former chief of the Royal Australian Navy, including his call for a “thinking navy.” Seventy years ago, in the final chapter of Methods of Operations Research, Morse and Kimball wrote:

“Referring again to the first sentence of Chapter 1 [the definition of operations research], we may emphasize at this point that operations research is not a pure research activity separated from all else; it is an integral part of an operating organization. It is a part of the thinking process of the operating organization, so to speak, the summing up of the facts bearing on the problem before a decision is made. Separate existence, by itself, would be as anomalous as the separate existence of the front lobe of a brain without the rest of the brain and body.” [emphasis added]

Thinking doesn’t just occur with OR methods, but the habits of OR certainly do promote thinking – logical thinking based on data. Is your organization a thinking organization? How do you promote the rigorous identification and solution of problems? OR may be a part of the strategy you use to answer those questions.

Where can you learn more?

The papers from the 2020 conference of the American Society for Engineering Education will soon be available here.

INFORMS has excellent information about Operations Research.

The Library of Congress listing for Methods of Operations Research is here.

It’s all just a simulation


A double pendulum simulation made with Python
Source: https://commons.wikimedia.org/wiki/File:Double_pendulum_simulation_python.gif

What’s new?

In 2006, the British magazine New Scientist published an article by philosopher Nick Bostrom titled “Is Reality a Simulation?” in which he repeated an argument he first made in 2003: that we may all be living in a computer simulation. The article continues to reverberate and create discussion, so New Scientist republished the 2006 article in its June 6-12, 2020, issue and included it in its new Essential Guide: The Nature of Reality.

What does it mean?

A simulation is a model of a real object or system. A computer simulation is a model implemented in computer code. Examples of simulations include computer games (such as SimCity), training exercises (such as disaster planning exercises), and some engineering tools (such as Solidworks).

A model or simulation is useful for performing experiments that would be costly, time consuming, and disruptive if performed on the real world system. In my field, industrial engineering, discrete event simulations are used to model the flow of products through production facilities, enabling the performance of experiments to determine, for example, the increase in product flow with the addition of a new machine or more staff (the phrase “discrete event” means the program steps through time simulating each event as it occurs). Many simulations incorporate random factors, so the simulation is run many times to estimate the probabilities of the various outcomes of the simulation (these are called “Monte Carlo simulations”, after the famed casino).
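
As a toy illustration of the Monte Carlo idea, here is a sketch that estimates the chance a job misses a deadline; the processing-time distributions and the ten-minute target are invented for illustration:

```python
# Monte Carlo estimate of the chance a job finishes late.
# The processing-time distributions and the 10-minute target
# are invented for illustration.
import random

def one_job():
    machining = random.uniform(4.0, 6.0)    # minutes
    inspection = random.expovariate(1 / 3)  # mean 3 minutes, highly variable
    return machining + inspection

random.seed(42)
trials = 100_000
late = sum(one_job() > 10.0 for _ in range(trials))
print(f"Estimated P(job takes more than 10 minutes) = {late / trials:.3f}")
```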

Bostrom’s argument is sophisticated, but put simply, he argues (1) that it is likely that somewhere in the universe civilizations have developed technologically beyond our current state, to the point where they can create sophisticated simulations including simulated minds that are conscious; (2) that such civilizations would be interested in creating simulations of their ancestors’ lives; and thus, (3) given the vast size and time scale of the universe, such simulations must have been created many times. Thus, Bostrom concludes, we are more likely to be living in a simulation than to be living in reality.

What does it mean for you?

I remember my college philosophy professor (many, many decades ago) telling my Introduction to Philosophy class that we might be the subject of an immersion experiment in which everything we experienced was simply the result of the artificially created world we lived in. He then posed the question: how would we know if we were in an experiment or the real world? I don’t remember the conclusion, or even any of the discussion, but obviously the question stayed with me. Over the years I have decided that, if the world is a simulation, it’s a very well done one and it’s all I have, so I might as well get on with “life.” I basically say to myself: I’m an engineer; let’s get on to practical topics.

Because simulations are practical, their use is growing. Using massive amounts of data to simulate the physical processes expressed in partial differential equations, numerical simulation is widely used to create weather forecasts that give probabilities over a range of possible outcomes. Simulations of the stock market are used to assess the viability of an investor’s portfolio. Simulation can be used to predict the spread of a disease in a population.

Simulation is widespread in gaming. I played an early game called Rogue (loosely based on Dungeons & Dragons), with treasure, monsters, and magic items, including the demonically named Boon of Genocide (the recipient can wipe out all of one kind of monster for the remainder of the game). Games are often at the forefront of the development of computer technology and these games have contributed to the development of simulations for learning and for decision making.

Simulations allow the user to perform experiments, and, more generally, a simulation is a learning environment. Through repeated use of the simulation, the user may be able to develop intuition that normally would take many years to acquire. However, the intuition acquired is intuition about the results from the simulation, which, whether the user realizes it or not, may not always match the results from the real world. Daniel P. Huffman recently reminded us that in the game SimCity, crime is “treated very much as a natural consequence of population growth” and the solution is easy: pay for a police station and “all residents are happier and everything gets better.” Current events show that you can learn the wrong lessons from a simulation. The use of simulation in policy is especially fraught with this risk. From Industrial Dynamics through Urban Dynamics and World Dynamics to Limits to Growth, policy makers have to be careful of the built-in assumptions of simulations, sometimes described as Malthus in, Malthus out, or Malthus with a computer.

I tell my students that engineers use many models but that we always must remember, “It’s only a model” (said with a shrug of one’s shoulders). Of course a model cannot represent all aspects of the real world, and sometimes a model will be wrong.

The opposite can also occur, in which the simulation is assumed to be wrong because the user rejects its actually accurate findings concerning the real world. I wrote a simulation many years ago to predict the pension costs for a company; I spent at least a week trying to find an error because the model predicted a seemingly high number of deaths among the employees. My boss and I both agreed there had to be an error. I finally called Human Resources and asked how many employees died last year; the number was close to what my model predicted. Our intuition about the real world was wrong and the simulation was right.

Simulation takes many forms. I use an online simulation of a Quincunx (or Plinko machine) to demonstrate the central limit theorem to engineering students. The animation of a simulation is always eye-catching and can give the user intuition about the system, but the important conclusions from a simulation usually come from analysis of the numbers generated by the simulation. Solidworks, a solid modeling package, can simulate the assembly of parts and can predict the stresses that parts will experience. A crop simulator simulates the growing of crops, enabling the user to test different crop management practices as climate changes. Simulation of the transportation system of a company can help with fleet management and logistics. Simulation plays an important role in Computer-Aided Drug Design.
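
The quincunx idea itself takes only a few lines to mimic. In this sketch (the numbers of rows and balls are arbitrary choices), each ball makes a left-or-right bounce at each pin, and the bin counts pile up into the bell shape the central limit theorem promises:

```python
# A text-mode quincunx: each ball bounces left (0) or right (1) at each
# of 12 pins, so its final bin is the sum of 12 coin flips. The row and
# ball counts are arbitrary. The resulting binomial histogram approaches
# the normal curve, as the central limit theorem promises.
import random
from collections import Counter

random.seed(7)
rows, balls = 12, 10_000
bins = Counter(sum(random.randint(0, 1) for _ in range(rows))
               for _ in range(balls))

for b in range(rows + 1):
    bar = "#" * round(200 * bins[b] / balls)
    print(f"{b:2d} | {bar}")
```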

Engineers use computer simulations as well as physical simulations such as a crash-test dummy, a shake table to test the effects of earthquakes on a building design, and the San Francisco Bay Model, used to study tidal flows. Physical simulations are also used in training, for example, in health care, aviation, and fire-fighting.

The growth in computing power, the increasing use of sensors, and improvements in computer graphics have made simulations even more useful and seductive.  A digital twin is a simulation of a particular object, usually updated frequently with data from sensors on the actual object. A digital twin of an object can be used in a simulation of a larger system, interacting with other digital twins to allow experimentation and prediction. More generally, agent based simulation involves writing computer code to describe the behavior of the components of a system and then letting them interact in the larger system; these researchers used the method to plan responses to a zombie invasion of Chicago.  Virtual reality is an immersive simulated experience (my philosophy professor’s hypothetical “is it real?” situation).
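
To give a flavor of agent-based simulation, here is a minimal sketch in which agents wander a grid and a contagion spreads by contact; the grid size, agent count, and infection probability are all invented for illustration:

```python
# A minimal agent-based simulation: agents take random steps on a
# 20-by-20 grid, and an infection spreads when agents share a cell.
# Grid size, agent count, and infection probability are invented.
import random
from collections import defaultdict

random.seed(3)
SIZE, AGENTS, DAYS, P_INFECT = 20, 200, 30, 0.5

# Each agent is [x, y, infected?]; start with three infected agents.
agents = [[random.randrange(SIZE), random.randrange(SIZE), i < 3]
          for i in range(AGENTS)]

for day in range(DAYS):
    cells = defaultdict(list)
    for agent in agents:  # each agent takes one random step
        agent[0] = (agent[0] + random.choice((-1, 0, 1))) % SIZE
        agent[1] = (agent[1] + random.choice((-1, 0, 1))) % SIZE
        cells[(agent[0], agent[1])].append(agent)
    for group in cells.values():  # contagion within a shared cell
        if any(a[2] for a in group):
            for a in group:
                if not a[2] and random.random() < P_INFECT:
                    a[2] = True
    print(f"day {day + 1}: {sum(a[2] for a in agents)} infected")
```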

The range of current and potential applications of simulation is staggering. No matter what your organization does, a simulation may exist already, although it will need to be adapted to your particular situation. Many simulation packages or general purpose simulation languages are available in free versions and some are open-source; you can even try out a simple simulation in a spreadsheet. Ask yourself what experiments you would like to perform on your organization; a simulation may be the way for you to do those experiments and deepen your understanding and intuition.

As simulations continue to improve, their use will spread. Perhaps some day soon we will be able to create simulations with conscious minds. Or maybe our descendants have already.

Where can you learn more?

Lists of simulation software are available at Wikipedia, Capterra, and SourceForge.

General purpose simulation companies often have case studies that may spark your thinking: see AnyLogic, Arena, and Simio, as examples.

While historical in focus and academic in style, this special issue of a journal from Springer-Verlag gives an overview of simulation.

Answer: What is artificial intelligence?

Source: https://en.wikipedia.org/wiki/Watson_(computer)

What’s new?

The computer program called Watson has been largely a failure in IBM’s plan to have it provide personalized advice to doctors treating patients with cancer.

What does it mean?

In 2011 the computer program Watson crushed the greatest human stars of the TV quiz show Jeopardy. Full disclosure: I am a big fan of Jeopardy. For those not in the know, the show involves three contestants who push a buzzer after a clue is read by the host; the first to buzz in must give the answer in the form of a question. Topics cover a wide range of history, science, art, literature, and popular culture, often including word play such as puns.

The Jeopardy rules were bent for Watson, with the episode being taped at the IBM Research Center, not at the usual TV studio. Certain categories of questions were omitted: audiovisual clues and clues that require explanation of how to interpret the clue. Also, the clues were transmitted to the computer in text, not orally. Speech recognition is still a tricky task for computers and this concession can be viewed as giving the computer a large advantage. Watson, like its human competitors, was not allowed to access the Internet during play.

Jeopardy players report that buzzer skills count at least as much as knowledge. Often all three players will know the correct reply, and the player who can buzz in quickly, perhaps anticipating the host’s cadence in reading the clue, will win the money. Human players are notified that they can buzz in by the appearance of a light, but Watson was notified by an electronic signal, again perhaps giving an advantage to the machine. Watson was required to press a buzzer as the humans did, but Watson could, when highly confident, “hit the buzzer in as little as 10 milliseconds, making it very hard for humans to beat,” as reported by the New York Times. The questions used in the match were not at a high level for Jeopardy, meaning that buzzer skills probably weighed heavily in the result.

About 20 researchers took three years to develop Watson. The components of Watson were designed for the specific Jeopardy task. The team identified types of Jeopardy questions and determined the language that would indicate the type of question (e.g., factoid). Its knowledge base, compiled from Wikipedia, encyclopedias, and some databases of specific information, was structured in various ways to aid quick retrieval. Wikipedia was prioritized as a source because analysis had shown that about 95% of Jeopardy answers are in the titles of Wikipedia pages. Watson had different components working in parallel to generate candidate answers, which were then evaluated for confidence. The developers used the work of others (including some open-source programs) to develop these components. Other components decided whether to buzz in, which square to pick next, and the amount to wager on a Daily Double or in Final Jeopardy.
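
To make that architecture concrete, here is a toy sketch of the generate-candidates-then-score pattern. This is emphatically not IBM’s code; the generators, their scores, and the buzz threshold are all invented for illustration:

```python
# Toy sketch of a generate-then-score answering pipeline.
# The generators, their scores, and the buzz threshold are invented;
# this only illustrates the pattern, not IBM's actual system.

def title_match_generator(clue):
    # Pretend knowledge-base lookup returning (candidate, evidence score).
    return [("Chicago", 0.6), ("Toronto", 0.4)]

def keyword_search_generator(clue):
    return [("Chicago", 0.7)]

GENERATORS = (title_match_generator, keyword_search_generator)

def answer_clue(clue, buzz_threshold=0.5):
    # Run all generators (Watson ran its components in parallel)
    # and merge the evidence for each candidate answer.
    evidence = {}
    for generate in GENERATORS:
        for candidate, score in generate(clue):
            evidence[candidate] = evidence.get(candidate, 0.0) + score
    total = sum(evidence.values())
    confidence = {c: s / total for c, s in evidence.items()}
    best = max(confidence, key=confidence.get)
    # Buzz in only when the best candidate's confidence clears the threshold.
    if confidence[best] >= buzz_threshold:
        return f"What is {best}?", round(confidence[best], 2)
    return None, round(confidence[best], 2)

print(answer_clue("Its largest airport is named for a World War II hero"))
```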

At the time of its Jeopardy win, Watson was touted by IBM as holding promise in more serious applications. The Guardian, for example, reported “IBM plans to use Watson’s linguistic and analytical abilities to develop products in areas such as medical diagnosis.” And the New York Times reported

“For I.B.M., the future will happen very quickly, company executives said. On Thursday it plans to announce that it will collaborate with Columbia University and the University of Maryland to create a physician’s assistant service that will allow doctors to query a cybernetic assistant. The company also plans to work with Nuance Communications Inc. to add voice recognition to the physician’s assistant, possibly making the service available in as little as 18 months.”

What does it mean for you?

Answering questions posed in natural language is a hard task for computers, and the IBM researchers should be congratulated for their achievement. But AI has a history of hype, and many are skeptical of IBM’s purpose in creating Watson. Some argue that IBM, not doing well in its core business, used Watson as a marketing tool, not as serious science. IBM has a history of doing publicity-catching projects, which it calls Grand Challenges, such as the machine that beat Garry Kasparov at chess in 1997. As an academic, I looked for, and could not find, a statement of the contribution the creation of Watson made to the advancement of the theory of artificial intelligence (AI). IBM, of course, argued that its purpose was to advance application, especially in medicine, but the results have been disappointing.

Some of the information needed to decide on a medical diagnosis, such as lab results and measurements of vital signs, is easily used by a computer program, but much of the information is in unstructured notes from doctors. Watson has, some think, the potential to help with such problems, but it has not been successful. An April 2019 article in IEEE Spectrum says that there have been no peer-reviewed papers of consequence showing a contribution to medical care by Watson. The article also describes how IBM’s efforts to provide advice in oncology were stymied by Watson’s inability to extract the information relevant to treatment from the vast array of literature.

Watson has been more widely used outside the US, but again perhaps based on marketing wins rather than results. “Many of these hospitals proudly use the IBM Watson brand in their marketing, telling patients that they’ll be getting AI-powered cancer care.” Actual results from those hospitals don’t seem to support a claim that the program offers a high level of care.

During the Jeopardy match, Watson failed in some laughable ways. For example, after an incorrect human response of “What are the ‘20s?” Watson buzzed in and offered “What is the 1920s?” The failure came from the fact that Watson was not programmed to listen to the previous answers. Again, computer programs are marvelous at the task they are programmed to do – and only at that task. More puzzlingly, Watson answered “What is Toronto?” in a Final Jeopardy category of U.S. Cities.

Some AI researchers will argue that progress is being made and that patience is required before these approaches demonstrate value. But climbing a tree is not the first step in sending a human to the moon. Does the ability of computer systems to perform on a quiz show mean that such programs are on the way to truly intelligent behavior? Perhaps the answer does not matter. In any useful application of advanced computing, the program is tailored to a specific task and only needs to be good at that task. A program that optimizes the routing of jobs in a factory doesn’t need to know how to tie its own shoes.

Before Watson competed on Jeopardy, IBM and the Jeopardy show had long negotiations leading to a version of Jeopardy tailored to Watson in many ways. AI may eventually be able to contribute to medical care, but only after the medical environment is changed to be more conducive to computer approaches. Electronic records are a step toward making a patient’s record more accessible to a computer, but a doctor’s notes, even when no longer handwritten, can still be ambiguous or confusing. Context matters, and computers are very bad at understanding context. Just as many believe that self-driving cars will only succeed in a carefully controlled driving environment, perhaps with no human-driven vehicles on the same roads, the medical data collection system may need considerable change to become Watson friendly.

I think that the term “artificial intelligence” distracts us from the progress being made in using computers to aid human endeavor. Philosophers and engineers have spent decades arguing over whether computers exhibit intelligence. With every AI achievement, the goal posts are moved. Human experts have all fallen to computer programs in checkers, chess, and Go. With the win on Jeopardy, the cry becomes “When Watson wins ‘Dancing With the Stars’ or even ‘The Amazing Race,’ I’ll be impressed.”

The important fact is that computers can reliably deliver amazing results for a narrowly defined task. However, the results are only as good as the programmer’s foresight in anticipating all the situations that may arise even within that narrowly defined task. When programs fail, they can do so in ways that baffle humans. Artificial stupidity seems amply demonstrated.

Where can you learn more?

“Jeopardy! as a Modern Turing Test: Did Watson Really Win?” explains the AI approaches used in creating Watson. The IBM Watson Research Team described the technical aspects of Watson in the 2010 Fall issue of AI Magazine.

My discussion of the failures in using Watson in medical care relies heavily on an April 2019 article from IEEE Spectrum.

The strongest arguments against the current methods of artificial intelligence come from philosopher John Searle, in the Chinese room thought experiment, and from the Dreyfus brothers, the late Berkeley philosophy professor Hubert and Berkeley engineering professor Stuart (full disclosure: I studied with Stuart Dreyfus for my PhD in industrial engineering), in their various books, including Mind Over Machine.

Is there a standard for that?


Harris & Ewing, photographer. Tire testing, Bureau of Standards. The Library of Congress.

What’s new?

The US National Institute of Standards and Technology (NIST) published a paper comparing the ability of five photon detectors to produce a measurable outcome when hit by a photon, that is, a quantum of light.  

Minnesota recently became the first state to adopt IEEE 1547-2018, a standard from the Institute of Electrical and Electronics Engineers that describes criteria for the interconnection of electrical power systems and distributed energy resources.

Harold O’Connor, a goldsmith from Salida, Colorado, is the author of The Jeweler’s Bench Reference.

What does it mean?

Improved photon detection is important in the development of lower-dose imaging of human tissues and in quantum cryptography to improve the security of data networks. The ability to accurately count photons may eventually form the basis for a new standard for measuring optical power.

The future of the electrical grid will involve energy generation and storage in many locations and from many sources, including renewable energy such as wind and solar; the safe, reliable, and efficient management of such a grid requires standards for the electrical interconnection of these many sources.

O’Connor’s book is a standard reference for jewelers because it includes clear descriptions of jewelry methods.

What does it mean for you?

Whatever technology your organization relies on, someone has already developed or is developing standards. You don’t have to go it alone; you don’t have to reinvent the wheel; some very smart people have thought through how to apply the technology and often that information is freely available. With any new technology, you should ask this important question: is there a standard for this technology?  

For example, a manufacturer of helmets for skiing or snowboarding may want its products to meet the ASTM F2040 standard, which includes requirements for strength and stability. The standard also refers to other ASTM standards that specify testing methods. ASTM, formerly known as the American Society for Testing and Materials, publishes voluntary standards, and has a range of membership options with the highest level allowing participation in technical committees. New standards are being developed by ASTM for gym equipment, mechanically stabilized earth walls, and racket sport eye protectors.

Standards exist for many technologies, but also for any situation where people agree on a procedure or where people want to use a procedure developed by experts. For example, Generally Accepted Accounting Principles (GAAP) were developed by the Financial Accounting Standards Board (FASB), which “is recognized by the Securities and Exchange Commission as the designated accounting standard setter for public companies.” Those concerned with financial accounting can participate in the actions of the FASB by seeking appointment to its various committees. The FASB’s activities are overseen by a seven-member board, which seeks to foster independence and the public interest.

As shown by this example, standards are usually set by an industry-supported organization in which members of that industry can participate. That organization may be recognized by governments as the appropriate entity to set those standards. The same organization, or other organizations, may audit and certify adherence to the standards. Clients or customers may want the products or services they buy to adhere to those standards.

In your particular business, professional magazines and organizations should keep you informed about relevant standards, but for a function that is not central to your business you may not be aware of the existence of relevant standards. For example, the ISSA (originally the International Sanitary Supply Association, now the Worldwide Cleaning Industry Association) is a source for cleaning standards, audits, and corrective actions. Does your janitorial service adhere to such an industry standard?

Many standards are voluntary, while others are prescribed by governments, as with Minnesota’s adoption of the IEEE standard cited at the beginning of this article. Even voluntary standards may be effectively required for commerce in certain industries. ISO 9001 certification (the certification for an organization’s quality system) is widely perceived as required for international trade. Mead Metals states that obtaining ISO certification in 1998 “was a key factor in expanding the company’s national and international customer base.” Even if not an official standard set by any organization, your technology may have widely accepted standards, such as O’Connor’s book on jewelers’ techniques.

Some standards are not freely available. The IEEE standards must be purchased, as must standards from ISO, the International Organization for Standardization, which works with 164 national member organizations and has created and published 22,919 international standards, on topics from assistive products to zinc alloys. ANSI, the American National Standards Institute, is the US member organization. For some standard-setting organizations, selling copies of the standards is an important source of revenue to support the work of setting standards.

The dark side of standards is captured by the saying “Standards are great; everyone has one,” as illustrated in this XKCD cartoon. While standard-setting bodies want to portray the process as benefiting the public good, standards can create winners and losers, and thus are often the result of power politics. Standard setting for the Internet has been a contentious process, and the development of standards for the new cannabis industry is still at the beginning stages. My father (a systems engineer at Bell Labs) told me stories about his endeavors at CCITT (the international standard-setting body for telephony), including the determination of the international standard for the shape of the hash mark, or octothorpe, symbol on the telephone keypad.

Where can you learn more?

The NIST article on photon detectors is “Calibration of free-space and fiber-coupled single-photon detectors,” by Thomas Gerrits, Alan Migdall, Joshua C Bienfang, John Lehman, Sae Woo Nam, Jolene Splett, Igor Vayshenker, and Jack Wang.

IEEE standards are available here, ASTM standards here, and the ISSA Clean Standard here. Harold O’Connor’s book The Jeweler’s Bench Reference is available here. The International Organization for Standardization (ISO) has technical committees in many areas. Also see their list of other bodies developing standards or guides.

In exploring for this column, I discovered two books I now have on my reading list.

New materials and sensors everywhere

What’s new?

Researchers at the US National Institute of Standards and Technology (NIST) have added a fluorescent material to fiber reinforced plastics to enable the detection of damage to the material over time.

What does it mean?

Fiber reinforced plastics are one type of composite material increasingly used to make strong, lightweight components for airplanes, automobiles, boats, and buildings. Fibers (carbon and glass are commonly used) are embedded in a plastic material, called a matrix, sometimes with the fibers aligned to add strength.

While such composites offer many benefits, they can deteriorate over time as the matrix and embedded fibers separate. In a 2005 incident, the rudder on an Airbus 310 broke off during a flight due to such separation. The pilots were able to recover control of the plane and land successfully with no injuries to occupants. Visual inspection had not detected any problem with the rudder. Other issues may have been involved in this incident, including a change in the sensitivity of the control system and possible aggressive use of the rudder by the pilot.

The NIST researchers have added small molecules, called mechanophores, that fluoresce under mechanical force, such as the force that occurs when tiny cracks appear between the fiber and matrix. Fiber reinforced plastics with mechanophores can then be easily scanned for interior cracks. NIST cites the possible use in detecting cracks in wind turbine blades.

What does it mean for you?

The new technology highlights progress in materials, trends toward embedded sensors, and the always present need to consider the people in the system.

All engineered materials are composites. Consider concrete, made from cement, which is “manufactured through a closely controlled chemical combination of calcium, silicon, aluminum, iron and other ingredients,” then mixed with water and other materials, and cured into a hard, rock-like substance that humans have used for thousands of years. Useful metals (steel, aluminum, cast iron) are all alloys, with different alloying elements in different quantities yielding metals with different useful properties. Even a wood I-beam is an engineered product, with solid sawn lumber joined to a web of board made by using adhesives and compression to bond layers of wood strands. Progress in almost every field of technology depends on advances in materials. Increasingly, physics and chemistry are supplemented by biology, for example in organic photovoltaics, hemp-reinforced plastics, and organic-inorganic composites in biomedical applications. Advances in the science and engineering of composites are improving the technology that will enable decarbonization of the economy through renewable energy for the generation of electricity and through improved energy storage.

In automobiles, the transition from carburetor to fuel injection, the addition of emission controls, and improvements to occupant comfort all rely on the ubiquity of sensors and computation. The Internet of Things and Industry 4.0 incorporate the exchange of data and the increasing use of computation, but the first requirement is always sensors to collect the data. Sensors can measure light, heat, pressure, motion, sound, moisture, magnetic fields, and in fact almost any physical property. Sensors can replace, literally, the canary in the mine to keep people safe underground, and remote sensing from a satellite in space can be used to assess crops on earth.
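As a toy illustration of that electronic canary, here is a short Python sketch that polls a gas sensor and raises an alarm when carbon monoxide exceeds a set level. The read_co_ppm function and the 35 ppm threshold are assumptions standing in for a real sensor driver and a real safety limit; the point is only the pattern: sense, compare, alert.

```python
# Toy sketch of an electronic canary: poll a carbon monoxide sensor
# and raise an alarm when the reading exceeds a set level. The sensor
# reading is simulated; a real system would call the hardware driver.
import random
import time

CO_ALARM_PPM = 35.0  # assumed alarm threshold, in parts per million

def read_co_ppm():
    # Stand-in for real hardware: simulate a noisy sensor reading.
    return max(0.0, random.gauss(10.0, 15.0))

def monitor(samples=20, interval_s=0.1):
    for _ in range(samples):
        ppm = read_co_ppm()
        if ppm > CO_ALARM_PPM:
            print(f"ALARM: carbon monoxide at {ppm:.1f} ppm")
        time.sleep(interval_s)

monitor()
```

A real deployment would add calibration, averaging to smooth out noise, and a fail-safe for a sensor that stops responding, but the sense-compare-alert loop is the heart of any such system.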

No matter what field your organization is in, I guarantee that new materials and the increasing use of sensors are affecting and will continue to affect your field. Many advise consumers not to buy the first model year of a redesigned car, and an issue with any new technology is to find the sweet spot between being the early adopter (said to be at the bleeding edge) and being the laggard. I tend to be a late adopter (I was the last person I knew to buy a microwave), but you need to think about the technology strategy for your organization. What are the key types of technology that drive your organization? Who is monitoring the environment for new advances in that technology?

Finally, some evidence in the Airbus 310 incident indicates that pilots had not been told enough about changes to the rudder and potential interactions with how the pilot might use the rudder. The application of radar in World War II is a well-known story about how technology supported war efforts, but less well known is the role of operations research in improving the use of radar by improving the operators’ techniques. Any technology is part of a system of technology and humans; the use of the technology by the humans can amplify or undermine the usefulness of the technology.

Where can you learn more?

The report by NIST is here.

Mostly we engineers are going to take care of these developments for you. Scientists and engineers working on new materials publish in many journals. The NIST researchers published their work in the journal Composites Science and Technology. Recently published articles in that journal covered topics such as the behavior of 3D braided composites at high temperatures, prediction of the fatigue life of a specific type of laminate, and methods to improve the strength of the interfaces of carbon fiber-epoxy composites.

You can track the implications for your field through your own professional associations, by making sure your organization monitors new products, and through industry publications and meetings. I love learning about these new developments, so for over 50 years I have read New Scientist, a weekly magazine with mostly very short articles on developments in all fields of science and technology. For example, an April article describes the use of vanadium dioxide, with added tungsten, printed in a grating to make smart windows that adjust to control how much light is transmitted during the day. Blocking heat from near-infrared light reduces the cooling needs of buildings fitted with such windows.