The Power of Truth® has been released for sale and assignment to a conservative pro-American news outlet, cable network, or other media outlet that wants to define and brand its operation as the bearer of the truth, and set itself above the competition.

In every news story the audience hears about censorship, speech and the truth. The Power of Truth® has significant value to define an outlet and expand its audience. A growing media outlet may decide to rebrand its operation as The Power of Truth®. An established outlet may choose to make it the slogan that distinguishes its operation from the competition. You want people to think of your outlet when they hear the phrase, and to think of the slogan when they see your company name. It answers the consumer's questions: Why should I choose you? Why should I listen to you? Think:

  • What’s in your wallet? – Capital One
  • The most trusted name in news – CNN
  • Fair and balanced – Fox News
  • Where’s the beef? – Wendy’s
  • You’re in good hands – Allstate
  • The ultimate driving machine – BMW

The Power of Truth® is registered at the federal trademark level in all applicable trademark classes, and the sale and assignment includes the applicable domain names. The buyer will have both the trademark and the domains so that it will control its business landscape without downrange interference.

Contact: Truth@ThePowerOfTruth.com

Itching can be uncomfortable, but it’s a normal part of your skin’s immune response to external threats. When you’re itching from an encounter with poison ivy or mosquitoes, consider that your urge to scratch may have evolved to get you to swat away disease-carrying pests[1].

However, for many people who suffer from chronic skin diseases like eczema, the sensation of itch can fuel a vicious cycle[2] of scratching that interrupts sleep, reduces productivity and prevents them from enjoying daily life[3]. This cycle is caused by sensory neurons and skin immune cells[4] working together to promote itching and skin inflammation.

But, paradoxically, some of the mechanisms behind this feedback loop also stop inflammation from getting worse. In our newly published research, my team of immunologists and neuroscientists and I[5] discovered that a specific type of itch-sensing neuron can push back on the itch-scratch-inflammation cycle[6] in the presence of a small protein. This protein, called interleukin-31, or IL-31[7], is typically involved in triggering itching.

This negative feedback loop – like the vicious cycle – is only possible because the itch-sensing nerve endings in your skin are closely intertwined with the millions of cells that make up your skin’s immune system[8].

Your skin has its own immune system.

An itchy molecule

The protein IL-31 is key to the connection between the nervous and immune systems. This molecule is produced by some immune cells[9], and like other members of this molecule family[10], it specializes in helping immune cells communicate with each other.

IL-31 is rarely present in the skin or blood of people who don’t have a history of eczema, allergies, asthma or related conditions. But those with conditions like eczema that cause chronic itch have significantly increased skin production of IL-31[11]. There is strong evidence that IL-31 is one of a small set of proteins that immune cells produce that can bind directly to sensory neurons and trigger itching[12]. Small amounts of purified IL-31 injected directly into skin or spinal fluid leads to impressively rapid-onset itching and scratching[13].

However, when my colleagues and I induced rashes in mice by exposing them to dust mites, we found that itch-sensing neurons turned down the dial on inflammation at the site of itching instead of promoting it. They did so by secreting small molecules called neuropeptides[14] that, in this context, directed immune cells to respond less enthusiastically. In sum, we had discovered an inverse relationship between itching and skin inflammation, tethered by a single molecule.

But if IL-31 triggers itching, which can worsen inflammation by making patients scratch their skin, how does it reduce inflammation?

We found the answer to this paradox in a little-known function of sensory neurons called neurogenic inflammation[15]. This nerve reflex triggers sensory neurons to release various signaling molecules directly into tissues, including specific neuropeptides that promote signs of inflammation[16] like increased blood flow to the skin. Neurogenic inflammation acts within the same nerves that transmit sensory information like itch, pain, touch and temperature, but differs by the path it takes: away from the brain rather than toward it.

We discovered that IL-31 can induce neurogenic inflammation, mapping a direct pathway[17] going from IL-31 through sensory neurons to repress immune cells in the skin. When we engineered mice to be unresponsive to IL-31, we similarly found that they had more activated skin immune cells that produced more inflammation. This means the net effect of IL-31 is to blunt overall inflammation.

Eczema’s vicious cycle of itch-scratch-inflammation can significantly affect a patient’s quality of life. SBenitez/Moment via Getty Images[18]

IL-31 as potential treatment

Our study shows that IL-31 causes sensory neurons in the skin to perform two very different functions[19]: They signal inward to the spinal cord and brain to stimulate an itching sensation that typically leads to more inflammation, but they also signal back out to the skin and quell inflammation by inhibiting certain immune cells.

Although paradoxical, this makes evolutionary sense. Scratching an itch can feel very satisfying but doesn’t have much utility in the modern world where we’re more likely to suffer from compulsive scratching than encounter stinging nettles. In contrast, unchecked inflammation underlies many chronic autoimmune diseases. Therefore, turning off an immune response in inflamed tissue can be as important as turning it on.

Our discoveries raise important questions about the implications of modifying IL-31 to treat different diseases. For one, it isn’t clear how IL-31-sensing neurons interface with other neuronal circuits[20] that also regulate skin inflammation. Furthermore, some patients have higher levels of allergic proteins[21] in their blood or develop asthma flares[22] when taking existing drugs that target IL-31. IL-31 is also found in some lung and gut cells – how and why would an itch-inducing molecule be present in internal organs?

Anatomical niches where sensory neurons and immune cells converge are present throughout the human body. If an itchy molecule like IL-31 can use neuronal circuitry to dampen an immune response in the skin, similar molecules like those used in migraine drugs[23] could be repurposed to treat skin conditions, too.


Because of its unique national security challenges, Israel has a long history of developing highly effective, state-of-the-art defense technologies and capabilities. A prime example of Israeli military strength is the Iron Dome air defense system[1], which has been widely touted as the world’s best defense against missiles and rockets[2].

However, on Oct. 7, 2023, Israel was caught off guard by a very large-scale missile attack by the Gaza-based Palestinian militant group Hamas. The group fired several thousand missiles[3] at a number of targets across Israel, according to reports. While exact details are not available, it is clear that a significant number of the Hamas missiles penetrated the Israeli defenses, inflicting extensive damage and casualties.

I am an aerospace engineer[4] who studies space and defense systems. There is a simple reason the Israeli defense strategy was not fully effective against the Hamas attack. To understand why, you first need to understand the basics of air defense systems.

Air defense: detect, decide, disable

An air defense system consists of three key components. First, there are radars to detect, identify and track incoming missiles. The range of these radars varies. Iron Dome’s radar is effective over distances of 2.5 to 43.5 miles (4 to 70 km)[5], according to its manufacturer Raytheon. Once an object has been detected by the radar, it must be assessed to determine whether it is a threat. Information such as direction and speed is used to make this determination.

If an object is confirmed as a threat, Iron Dome operators continue to track the object by radar. Missile speeds vary considerably, but assuming a representative speed of 3,280 feet per second (1 km/s), the defense system has at most one minute to respond to an attack.
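
As a rough check on that figure (assuming the threat is first picked up at the radar's maximum quoted range and flies at the representative 1 km/s):

$$ t = \frac{d}{v} = \frac{70\ \text{km}}{1\ \text{km/s}} = 70\ \text{s}, \qquad t_{\min} = \frac{4\ \text{km}}{1\ \text{km/s}} = 4\ \text{s}. $$

In other words, the window for deciding and firing runs from roughly a minute down to just a few seconds for short-range launches.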

The fundamental elements of a missile defense system. Nguyen, Dang-An et al.[6], CC BY-NC[7]

The second major element of an air defense system is the battle control center. This component determines the appropriate way to engage a confirmed threat. It uses the continually updating radar information to determine the optimal response: where to fire interceptor missiles from, and how many to launch against an incoming missile.

The third major component is the interceptor missile itself. For Iron Dome, it is a supersonic missile with heat-seeking sensors. These sensors provide in-flight updates to the interceptor, allowing it to steer toward and close in on the threat. The interceptor uses a proximity fuse activated by a small radar to explode close to the incoming missile so that it does not have to hit it directly to disable it.

Limits of missile defenses

Israel has at least 10 Iron Dome batteries in operation[8], each containing 60 to 80 interceptor missiles. Each of those missiles costs about US$60,000. In previous attacks involving smaller numbers of missiles and rockets, Iron Dome was 90% effective against a range of threats.

So, why was the system less effective against the recent Hamas attacks?

It is a simple question of numbers. Hamas fired several thousand missiles, and Israel had fewer than a thousand interceptors in the field ready to counter them. Even if Iron Dome had been 100% effective against the incoming threats, the very large number of Hamas missiles meant some were going to get through.

The Hamas attacks illustrate very clearly that even the best air defense systems can be overwhelmed if they are overmatched by the number of threats they have to counter.

How Iron Dome works.

The Israeli missile defense has been built up over many years, with high levels of financial investment. How could Hamas afford to overwhelm it? Again, it all comes down to numbers. The missiles fired by Hamas cost about $600 each, and so they are about 100 times less expensive than the Iron Dome interceptors. The total cost to Israel of firing all of its interceptors is around $48 million. If Hamas fired 5,000 missiles, the cost would be only $3 million.
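
The asymmetry is easy to verify with the article's own numbers (taking the upper end of 10 batteries with 80 interceptors each):

$$ 10 \times 80 \times \$60{,}000 \approx \$48\ \text{million}, \qquad 5{,}000 \times \$600 = \$3\ \text{million}, \qquad \frac{\$60{,}000}{\$600} = 100. $$

Each intercept costs roughly 100 times as much as the rocket it destroys, and emptying the entire interceptor stockpile costs about 16 times as much as the whole salvo.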

Thus, in a carefully planned and executed strategy, Hamas accumulated over time a large number of relatively inexpensive missiles that it knew would overwhelm the Iron Dome defensive capabilities. Unfortunately for Israel, the Hamas attack represents a very clear example of military asymmetry: a low-cost, less-capable approach was able to defeat a more expensive, high-technology system.

Future air defense systems

The Hamas attack will have repercussions for all of the world’s major military powers. It clearly illustrates the need for air defense systems that are much more effective in two important ways. First, there is the need for a much deeper arsenal of defensive weapons that can address very large numbers of missile threats. Second, the cost per defensive weapon needs to be reduced significantly.

This episode is likely to accelerate the development and deployment of directed energy air defense systems[9] based on high-energy lasers and high-power microwaves. These devices are sometimes described as having an “infinite magazine[10],” because they have a relatively low cost per shot fired and can keep firing as long as they are supplied with electrical power.


When you hear the word comet, you might imagine a bright streak moving across the sky. You may have a family member who saw a comet before you were born, or you may have seen one yourself when comet Nishimura passed by Earth[1] in September 2023. But what are these special celestial objects made of? Where do they come from, and why do they have such long tails?

As a planetarium director[2], I spend most of my time getting people excited about and interested in space. Nothing piques people’s interest in Earth’s place in the universe quite like comets. They’re unpredictable, and they often go undetected until they get close to the Sun. I still get excited when one comes into view.

What exactly is a comet?

Comets are leftover material from the formation of the solar system. As the solar system formed about 4.5 billion years ago[3], most gas, dust, rock and metal ended up in the Sun or the planets. What did not get captured was left over as comets and asteroids[4].

Because comets are[5] clumps of rock, dust, ice and the frozen forms of various gases and molecules, they’re often called[6] “dirty snowballs” or “icy dirtballs” by astronomers. These clumps of ice and dirt make up what’s called the comet nucleus.

Size comparison of various comet nuclei. NASA, ESA, Zena Levy (STScI)[7]

Outside the nucleus is a porous, almost fluffy layer of ice, kind of like a snow cone. This layer is surrounded by a dense crystalline crust[8], which forms when the comet passes near the Sun and its outer layers heat up. With a crispy outside and a fluffy inside, astronomers have compared comets to deep-fried ice cream[9].

Most comets are a few miles wide[10], and the largest known is about 85 miles[11] wide. Because they are relatively small and dark compared with other objects in the solar system, people can’t see them unless the comet gets close to the Sun.

Pin the tail on the comet

Comet Hale-Bopp as seen from Earth in 1997. The blue ion tail is visible to the top left of the comet. Philipp Salzgeber[12], CC BY-ND[13]

As a comet moves close to the Sun, it heats up. The various frozen gases and molecules making up the comet change directly from solid ice to gas in a process called sublimation[14]. This sublimation process releases dust particles trapped under the comet’s surface.

The dust and released gas form a cloud around the comet called a coma. This gas and dust interact with the Sun to form two different tails[15].

The first tail, made up of gas, is called the ion tail[16]. The Sun’s radiation strips electrons from the gases in the coma, leaving them with a positive charge. These charged gases are called ions. Wind from the Sun then pushes these charged gas particles directly away from the Sun, forming a tail that appears blue in color. The blue color comes from large numbers of carbon monoxide[17] ions in the tail.

The dust tail forms from the dust particles released during sublimation. These are pushed away from the Sun by pressure caused by the Sun’s light[18]. The tail reflects the sunlight and swoops behind the comet as it moves, giving the comet’s tail a curve[19].

The closer a comet gets to the Sun, the longer and brighter its tail will grow. The tail can grow significantly longer than the nucleus and clock in around half a million miles long[20].

Where do comets come from?

All comets have highly eccentric orbits[21]. Their paths are elongated ovals with extreme trajectories that take them both very close to and very far from the Sun.

Comets’ orbits can be very long, meaning they may spend most of their time in far-off reaches of the solar system.

An object will orbit faster the closer it is[22] to the Sun, as angular momentum is conserved[23]. Think about how an ice skater spins faster[24] when they bring their arms in closer to their body – similarly, comets speed up when they get close to the Sun. Otherwise, comets spend most of their time moving relatively slowly through the outer reaches of the solar system.
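
A simplified way to see this, treating the comet as a point mass moving only under the Sun's gravity: its orbital angular momentum stays constant, so the transverse part of its speed rises as its distance from the Sun shrinks.

$$ L = m\, v_{\perp}\, r = \text{const} \;\Rightarrow\; v_{\perp} \propto \frac{1}{r} $$

A comet whose closest approach is 1,000 times nearer the Sun than its farthest point is therefore moving about 1,000 times faster, in the transverse direction, at that closest approach.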

A lot of comets likely originate in a far-out region of our solar system called the Oort cloud[25].

The Oort cloud is predicted to be a round shell of small solar system bodies[26] that surrounds the solar system, with an innermost boundary about 2,000 times farther from the Sun than Earth. For reference, Pluto is only about 40 times farther[27].

A NASA diagram of the Oort cloud’s structure. The term KBO refers to Kuiper Belt objects near where Pluto lies. NASA[28]

Comets from the Oort cloud take over 200 years to complete their orbits, a metric called the orbital period. Because of their long periods, they’re called long-period comets[29]. Astronomers often don’t know much about these comets until they get close to the inner solar system.

Short-period comets[30], on the other hand, have orbital periods of less than 200 years. Halley’s comet is a famous comet that comes close to the Sun every 75 years.

While that’s a long time for a human, that’s a short period for a comet. Short-period comets generally come from the Kuiper Belt[31], a belt of small icy bodies out beyond Neptune and, most famously, the home of Pluto.
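
A standard result the article doesn't spell out, Kepler's third law, ties a comet's period to the size of its orbit: with the period P in years and the semi-major axis a in astronomical units (Earth-Sun distances),

$$ P^2 = a^3 \;\Rightarrow\; a = P^{2/3}, \qquad a_{\text{Halley}} \approx 75^{2/3} \approx 18\ \text{AU}. $$

Because Halley’s orbit is highly elongated, its farthest point lies out near twice that distance, roughly 35 AU from the Sun – past Neptune, in the region described here as the home of short-period comets.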

There’s a subset of short-period comets that get only to about Jupiter’s orbit at their farthest point from the Sun. These have orbital periods of less than 20 years and are called Jupiter-family comets[32].

Comets’ time in the inner solar system is relatively short, generally on the order of weeks to months[33]. As they approach the Sun, their tails grow and they brighten before fading on their way back to the outer solar system.

But even the short-period comets don’t come around often, and their porous interior means they can sometimes fall apart. All of this makes their behavior difficult to predict[34]. Astronomers can track comets when they are coming toward the inner solar system and make predictions based on observations. But they never quite know if a comet will get bright enough to be seen with the naked eye as it passes Earth, or if it will fall apart and fizzle out as it enters the inner solar system.

Either way, comets will keep people looking up at the skies for years to come.


Cancer arises when cells accumulate enough damage to change their normal behavior. The likelihood of accruing damage increases with age[1] because the safeguards in your genetic code that ensure cells function for the greater good of the body weaken over time.

Why, then, do children who haven’t had sufficient time to accumulate damage develop cancer?

I am a doctoral student[2] who is exploring the evolutionary origins of cancer. Viewed through an evolutionary lens, cancer develops from the breakdown of the cellular collaboration[3] that initially enabled cells to come together and function as one organism.

Cells in children are still learning how to collaborate. Pediatric cancer develops when rogue cells that defy cooperation emerge and grow at the body’s expense.

Adult versus pediatric cancer

The cells in your body adhere to a set of instructions defined by their genetic makeup[4] – a unique code that carries all the information that cells need to perform their specific function. When cells divide, the genetic code is copied and passed from one cell to another. Copying errors can occur in this process and contribute to the development of cancer.

In adults, cancer evolves through a gradual accrual of errors and damage in the genetic code. Although there are safeguards against uncontrolled cell growth[5] and repair mechanisms[6] to fix genetic errors, aging, exposure to environmental toxins and an unhealthy lifestyle can weaken these protections and lead to the breakdown of tissues. The most common types of adult cancers, such as breast cancer[7] and lung cancer[8], often result from such accumulated damage.

In children, whose tissues are still developing, there is a dual dynamic between growth and cancer prevention. On one hand, rapidly dividing cells are organizing themselves into tissues in an environment with limited immune surveillance[9] – an ideal setting for cancer development. On the other hand, children have robust safeguards and tightly regulated mechanisms that act as counterforces against cancer and make it a rare occurrence.

Although pediatric cancer is rare, it is a leading cause of death for children under 15 in the U.S. FatCamera/E+ via Getty Images[10]

Children seldom accumulate errors in their genetic code, and pediatric cancer patients have a much lower incidence of genetic errors[11] than adult cancer patients. However, nearly 10%[12] of pediatric cancer[13] cases in the U.S. are due to inherited genetic mutations. The most common heritable cancers arise from genetic errors that influence cell fate – that is, what a cell becomes – during the developmental stages before birth. Mistakes in embryonic cells accumulate in all subsequent cells after birth and can ultimately manifest as cancer.

Pediatric cancers can also spontaneously arise while children are growing. These are driven by genetic alterations distinct from those common in adults. Unlike in adults, where damage typically accumulates as small errors during cell division, pediatric cancers often result from large-scale rearrangements[14] of the genetic code. Different regions of the genetic code swap places, disrupting the cell’s instructions beyond repair.

Such changes frequently occur in tissues with constant turnover, such as the brain[15], muscles[16] and blood[17]. Unsurprisingly, the most prevalent[18] pediatric cancers often emerge from these tissues.

Genetic alterations are not a prerequisite for pediatric cancers. In certain pediatric brain cancers, the region of the genetic code responsible for cell specialization becomes permanently silenced[19]. Although there is no error in the genetic code itself, the cell is unable to read it. Consequently, these cells become trapped in an uncontrolled state of division, ultimately leading to cancer.

Tailoring treatments for pediatric cancer

Cells in children typically exhibit greater growth, mobility and flexibility. This means that pediatric cancer is often more invasive and aggressive[20] than that of adults, and can severely affect development even after successful therapy due to long-term damage. Because the cancer trajectories in children and adults are markedly different, treatment approaches should also be different for each.

Standard cancer therapy includes radiotherapy or chemotherapy, which affect both cancerous and healthy, actively dividing cells. If the patient becomes unresponsive to these treatments, oncologists try a different drug.

In children, the side effects of certain treatments are amplified[21] since their cells are actively growing. Unlike adult cancers, where different drugs can target different genetic errors, pediatric cancers have fewer of these targets[22]. The rarity of pediatric cancer also makes it challenging to test new therapies in large-scale clinical trials.

Standard cancer treatments can lead to lifelong effects for pediatric patients.

A common reason for treatment failure is when cancer cells adapt to evade treatment and become drug resistant[23]. Applying principles from evolutionary biology to cancer treatment can help tackle this.

For example, extinction therapy[24] is an approach to treatment inspired by natural mass extinction events. The goal of this therapy is to eradicate all cancer cells before they can evolve. It does this by applying a “first strike” drug that kills most cancer cells. The remaining few cancer cells are then targeted through focused, smaller-scale interventions.

If complete extinction is not possible, the goal turns to preventing treatment resistance and keeping the tumor from progressing. This can be achieved with adaptive therapy[25], which takes advantage of the competition for survival among cancer cells. Treatment is dynamically turned “on” and “off” to keep the tumor stable while allowing cells that are sensitive to the therapy to out-compete and suppress resistant cells. This approach preserves the tissue[26] and improves survival.

Although pediatric cancer patients have a better prognosis than adults do after treatment, cancer remains the second-leading cause of death[27] in children under 15 in the U.S. Recognizing the developmental differences between pediatric and adult cancers and using evolutionary theory[28] to “anticipate and steer[29]” the cancer’s trajectory can enhance outcomes for children. This could ultimately improve young patients’ chances for a brighter, cancer-free future.


Everyone has a different tolerance for spicy food — some love the burn, while others can’t take the heat. But the scientific evidence on whether spicy food has an effect, positive or negative, on your health is decidedly mixed.

In September 2023, a 14-year-old boy died after consuming a spicy pepper as part of the viral “one chip challenge[1].” The Paqui One Chip Challenge uses Carolina Reaper and Naga Viper peppers, which are among the hottest peppers in the world[2].

While the boy’s death is still under investigation by health officials, it has prompted some stores to remove the spicy chips used in these challenges from their shelves[3].

Many stores have removed the Paqui One Chip Challenge chips from their shelves. AP Photo/Steve LeBlanc[4]

As an epidemiologist[5], I’m interested in how spicy food can affect people’s health and potentially worsen symptoms associated with chronic diseases like inflammatory bowel disease. I am also interested in how diet, including spicy foods, can increase or decrease a person’s lifespan.

The allure of spicy food

Spicy food can refer to food with plenty of flavor from spices, such as Asian curries, Tex-Mex dishes or Hungarian paprikash. It can also refer to foods with noticeable heat from capsaicin[6], a chemical compound found to varying degrees in hot peppers[7].

As the capsaicin content of a pepper increases, so does its ranking on the Scoville scale[8], which quantifies how hot a pepper tastes.

Capsaicin tastes hot because it activates certain biological pathways[9] in mammals – the same pathways activated by hot temperatures[10]. The pain produced by spicy food can provoke the body[11] to release endorphins and dopamine. This release can prompt a sense of relief or even a degree of euphoria.

In the U.S., the U.K. and elsewhere, more people than ever are consuming spicy foods[12], including extreme pepper varieties.

Hot-pepper-eating contests and similar “spicy food challenges” aren’t new, although spicy food challenges have gotten hotter – in terms of spice level and popularity on social media[13].

Hot peppers like the Carolina Reaper can induce sweating and make the consumer feel like their mouth is burning.

Short-term health effects

The short-term effects of consuming extremely spicy foods range from a pleasurable sensation of heat to an unpleasant burning sensation[14] across the lips, tongue and mouth. These foods can also cause various forms of digestive tract discomfort[15], headaches and vomiting[16].

If spicy foods are uncomfortable to eat, or cause unpleasant symptoms like migraines, abdominal pain and diarrhea, then it’s probably best to avoid those foods. Spicy food may cause these symptoms in people with inflammatory bowel diseases[17], for example.

Spicy food challenges notwithstanding, for many people across the world, consumption of spicy food is part of a long-term lifestyle influenced by geography and culture[18].

For example, hot peppers grow in hot climates, which may explain why many cultures in these climates use spicy foods[19] in their cooking. Some research suggests that spicy foods help control foodborne illnesses[20], which may also explain cultural preferences for spicy foods[21].

Chile peppers growing in Mexico. AP Photo/Andres Leighton[22]

Lack of consensus

Nutritional epidemiologists have been studying the potential risks and benefits of long-term spicy food consumption for many years. Some of the outcomes examined[23] in relation to spicy food consumption include obesity[24], cardiovascular disease[25], cancer[26], Alzheimer’s disease[27], heartburn and ulcers[28], psychological health[29], pain sensitivity[30] and death from any cause[31] – also called all-cause mortality.

These studies report mixed results, with some outcomes like heartburn more strongly linked to spicy food consumption. As can be expected with an evolving science, some experts are more certain about some of these health effects than others.

For example, some experts state with confidence that spicy food does not cause stomach ulcers[32], whereas the association with stomach cancer[33] isn’t as clear.

When taking heart disease, cancer and all other causes of death in a study population into consideration, does eating spicy food increase or decrease the risk of early death?

Right now, the evidence from large population-based studies suggests that spicy food does not increase the risk of all-cause mortality among a population and may actually decrease the risk[34].

However, when considering the results of these studies, keep in mind that what people eat is one part of a larger set of lifestyle factors – such as physical activity, relative body weight and consumption of tobacco and alcohol – that also have health consequences.

It’s not easy for researchers to measure diet and lifestyle factors accurately in a population-based study, at least in part because people don’t always remember or report their exposure[35] accurately. It often takes numerous studies conducted over many years to reach a firm conclusion about how a dietary factor affects a certain aspect of health.

Scientists still don’t entirely know why so many people enjoy spicy foods[36] while others do not, although there is plenty of speculation[37] regarding evolutionary, cultural and geographic factors, as well as medical, biological and psychological ones[38].

One thing experts do know, however, is that humans are one of the only animals that will intentionally eat something spicy enough to cause them pain, all for the sake of pleasure[39].


Each October, the Nobel Prizes celebrate a handful of groundbreaking scientific achievements. And while many of the awarded discoveries revolutionize the field of science, some originate in unconventional places. For George de Hevesy[1], the 1943 Nobel Laureate in chemistry who discovered radioactive tracers, that place was a boarding house cafeteria in Manchester, U.K., in 1911.

Hungarian chemist George de Hevesy. Magnus Manske[2]

De Hevesy had the sneaking suspicion that the staff of the boarding house cafeteria where he ate every day was reusing leftovers from the dinner plates – each day’s soup seemed to contain all of the prior day’s ingredients. So he came up with a plan to test his theory.

At the time, de Hevesy was working with radioactive material. He sprinkled a small amount[3] of radioactive material in his leftover meat. A few days later, he took an electroscope with him to the kitchen and measured the radioactivity[4] in the prepared food.

His landlady, who was to blame for the recycled food, exclaimed “this is magic” when de Hevesy showed her his results, but really, it was just the first successful radioactive tracer experiment.

We are[5] a team of chemists[6] and physicists who work[7] at the Facility for Rare Isotope Beams[8], located at Michigan State University. De Hevesy’s early research in the field has revolutionized the way that modern scientists like us use radioactive material, and it has led to a variety of scientific and medical advances.

The nuisance of lead

A year before conducting his recycled ingredients experiment, Hungary-born de Hevesy had traveled to the U.K.[9] to start work with nuclear scientist Ernest Rutherford[10], who’d won a Nobel Prize just two years prior.

Rutherford was at the time working with a radioactive substance[11] called radium D, a valuable byproduct of radium because of its long half-life[12] (22 years). However, Rutherford couldn’t use his radium D sample, as it had large amounts of lead mixed in.

When de Hevesy arrived, Rutherford asked him to separate the radium D[13] from the nuisance lead. The nuisance lead was made up of a combination of stable isotopes of lead (Pb). Each isotope had the same number of protons (82 for lead), but a different number of neutrons.

De Hevesy worked on separating the radium D from the natural lead using chemical separation techniques for almost two years, with no success[14]. The reason for his failure was that, unknown to anyone at the time, radium D was actually a different form of lead – namely the radioactive isotope, or radioisotope Pb-210.

Nevertheless, de Hevesy’s failure led to an even bigger discovery. The creative scientist figured out that if he could not separate radium D from natural lead, he could use it as a tracer of lead.

Radioactive isotopes[15], like Pb-210, are unstable isotopes, which means that over time they will transform into a different element. During this transformation, called radioactive decay, they typically release particles or light, which can be detected as radioactivity[16].
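
The pace of that transformation is set by the isotope's half-life. As a minimal worked example, using the roughly 22-year half-life of radium D (Pb-210) mentioned earlier:

$$ N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad N(22\ \text{yr}) = \tfrac{1}{2}N_0, \qquad N(44\ \text{yr}) = \tfrac{1}{4}N_0. $$

That slow decay is part of what made radium D so valuable as a tracer: a sample spiked into an experiment keeps emitting a detectable signal for decades.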

Through radioactivity, an unstable isotope can turn from one element to another.

This radioactivity acts as a signature indicating the presence of the radioactive isotope. This critical property of radioisotopes allows them to be used as tracers.

Radium D as a tracer

A tracer[17] is a substance that stands out in a crowd of similar material because it has unique qualities that make it easy to track.

For example, if you have a group of kindergartners going on a field trip and one of them is wearing a smartwatch, you can tell if the group went to the playground by tracking the GPS signal on the smartwatch. In de Hevesy’s case, the kindergartners were the lead atoms, the smartwatch was radium D, and the GPS signal was the emitted radioactivity.

In the 1910s, the Vienna Institute of Radium Research[18] had a larger collection of radium[19] and its byproducts than any other institution. To continue his experiments with radium D, de Hevesy moved to Vienna in 1912.

He collaborated with Fritz Paneth, who had also attempted the impossible task of separating radium D from lead without success. The two scientists “spiked” samples of different chemical compounds with small amounts of a radioactive tracer. This way they could study chemical processes by tracking the movement of the radioactivity across different chemical reactions[20].

De Hevesy continued his work studying chemical processes using different isotopic markers for many years. He was even the first to introduce nonradioactive tracers. One nonradioactive tracer he studied was a heavier isotope of hydrogen, called deuterium[21]. Deuterium is 10,000 times less abundant than common hydrogen, but it is roughly twice as heavy, which makes it easier to separate the two.

De Hevesy and his co-author used deuterium to track water in their bodies. In their investigations, they took turns ingesting samples and measuring the deuterium in their urine to study the elimination of water[22] from the human body.

De Hevesy was awarded the 1943 Nobel Prize in chemistry[23] “for his work on the use of isotopes as tracers in the study of chemical processes.”

Radioactive tracers today

More than a century after de Hevesy’s experiments, many fields now routinely use radioactive tracers, from medicine to materials science and biology.

These tracers can monitor the progression of disease in medical procedures[24], the uptake of nutrients in plant biology[25], the age and flow of water in aquifers[26] and the measurement of wear and corrosion of materials[27], among other applications. Radioisotopes allow researchers to follow the paths of nutrients and drugs in living systems without invasively cutting the tissue.

Radioactive tracers, seen in the top left photo as a white spot and indicated by an arrow in the top right, are often used today in brain scans. mr. suphachai praserdumrongchai/iStock via Getty Images[28]

In modern research, scientists focus on producing new isotopes and on developing procedures to use radioactive tracers more efficiently. The Facility for Rare Isotope Beams[29], or FRIB, where the three of us work, has a program dedicated to the production and harvesting of unique radioisotopes. These radioisotopes are then used in medical and other applications.

FRIB produces radioactive beams[30] for its basic science program. In the production process, a large number of unused isotopes are collected in a tank of water, where they can be later isolated and studied[31].

Scientists Greg Severin and Katharina Domnanich at the Facility for Rare Isotope Beams. Facility for Rare Isotope Beams.

One recent study involved the isolation of the radioisotope Zn-62[32] from the irradiated water. This was a challenging task considering there were 100 quadrillion times more water molecules than Zn-62 atoms. Zn-62 is an important radioactive tracer utilized to follow the metabolism of zinc in plants and in nuclear medicine.

Eighty years ago, de Hevesy managed to take a dead-end separation project and turn it into a discovery that created a new scientific field. Radioactive tracers have already changed human lives in so many ways. Nevertheless, scientists are continuing to develop new radioactive tracers and find innovative ways to use them.


The 2023 Nobel Prize for chemistry isn’t the[1] first Nobel[2] awarded for[3] research in[4] nanotechnology[5]. But it is perhaps the most colorful application of the technology to be associated with the accolade.

This year’s prize recognizes Moungi Bawendi[6], Louis Brus[7] and Alexei Ekimov[8] for the discovery and development of quantum dots[9]. For many years, these precisely constructed nanometer-sized particles[10] – just a few hundred thousandths the width of a human hair in diameter – were the darlings of nanotechnology pitches and presentations. As a researcher[11] and adviser[12] on nanotechnology, I’ve even used them myself[13] when talking with developers, policymakers, advocacy groups and others about the promise and perils of the technology.

The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959[14], and engineers like Erik Drexler were speculating about the possibilities of atomically precise manufacturing in the 1980s[15]. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology, when researchers began putting breakthroughs in material science to practical use[16].

Quantum dots brilliantly fluoresce[17]: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.

The wavelength of light a quantum dot emits depends on its size. Maysinger, Ji, Hutter, Cooper[18], CC BY[19]

This property has led to many arresting images of rows of vials containing quantum dots of different sizes going from a striking blue on one end, through greens and oranges, to a vibrant red at the other. So eye-catching is this demonstration of the power of nanotechnology that, in the early 2000s, quantum dots became iconic of the strangeness and novelty of nanotechnology.

But, of course, quantum dots are more than a visually attractive parlor trick. They demonstrate that unique, controllable and useful interactions between matter and light can be achieved through engineering the physical form of matter – modifying the size, shape and structure of objects, for instance – rather than playing with the chemical bonds between atoms and molecules. The distinction is an important one, and it’s at the heart of modern nanotechnology.

Skip chemical bonds, rely on quantum physics

The wavelengths of light that a material absorbs, reflects or emits are usually determined by the chemical bonds that bind its constituent atoms together. Play with the chemistry of a material[20] and it’s possible to fine-tune these bonds so that they give you the colors you want. For instance, some of the earliest synthetic dyes started with a clear substance such as aniline[21] that was transformed through chemical reactions into the desired hue.

It’s an effective way to work with light and color, but it also leads to products that fade over time as those bonds degrade[22]. It also frequently involves using chemicals that are harmful to humans and the environment[23].

Quantum dots work differently. Rather than depending on chemical bonds to determine the wavelengths of light they absorb and emit, they rely on very small clusters of semiconducting materials[24]. It’s the quantum physics of these clusters[25] that then determines what wavelengths of light are emitted – and this in turn depends on how large or small the clusters are.
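
A rough way to see the size dependence is the textbook “particle in a box” picture – a simplification, not the full model the laureates developed: confining the semiconductor’s charge carriers to a dot of radius R adds a confinement energy to the bulk band gap that grows as the dot shrinks,

$$ E_{\text{emit}}(R) \approx E_{\text{bulk gap}} + \frac{\hbar^2 \pi^2}{2 m^{*} R^2}, $$

where m* is a combined effective mass for the electron and hole (the fuller treatment, the Brus equation, also includes a Coulomb correction). Halving the radius roughly quadruples the confinement term, which is why smaller dots emit bluer, higher-energy light and larger ones emit redder light.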

This ability to tune how a material behaves by simply changing its size is a game changer when it comes to the intensity and quality of light that quantum dots can produce, as well as their resistance to bleaching or fading, their novel uses and – if engineered smartly – their toxicity.

Of course, few materials are completely nontoxic, and quantum dots are no exception. Early quantum dots, for instance, were often based on cadmium selenide, whose component elements are toxic. However, the potential toxicity of quantum dots needs to be balanced[26] against the likelihood of release and exposure, and against how they compare with alternatives.

Quantum dots are now a normal part of many consumer items, including televisions. Soeren Stache/picture alliance via Getty Images[27]

Since its earlier days, quantum dot technology has evolved in safety and usefulness and has found its way into an increasing number of products, from displays[28] and lighting[29], to sensors[30], biomedical applications[31] and more. In the process, some of their novelty has perhaps worn off. It can be hard to remember just how much of a quantum leap the technology is that’s being used to promote the latest generation of flashy TVs[32], for instance.

And yet, quantum dots are a pivotal part of a technology transition that’s revolutionizing how people work with atoms and molecules.

‘Base coding’ on an atomic level

In my book “Films from the Future: the Technology and Morality of Sci-Fi Movies[33],” I write about the concept of “base coding[34].” The idea is simple: If people can manipulate the most basic code that defines the world we live in, we can begin to redesign and reengineer it.

This concept is intuitive when it comes to computing, where programmers use the “base code” of 1s and 0s, albeit through higher-level languages. It also makes sense in biology, where scientists are becoming increasingly adept at reading and writing the base code of DNA and RNA – in this case, using the chemical bases adenine, guanine, cytosine and thymine as their coding language.

This ability to work with base codes also extends to the material world. Here, the code is made up of atoms and molecules and how they are arranged in ways that lead to novel properties.

Bawendi, Brus and Ekimov’s work on quantum dots is a perfect example of this form of material-world base coding. By precisely forming small clusters of particular atoms into spherical “dots,” they were able to tap into novel quantum properties that would otherwise be inaccessible. Through their work they demonstrated the transformative power that comes through coding with atoms.

An example of ‘base coding’ using atoms to create a material with novel properties is a single molecule ‘nanocar’ crafted by chemists that can be controlled as it ‘drives’ over a surface. Alexis van Venrooy/Rice University[35], CC BY-ND[36]

They paved the way for increasingly sophisticated nanoscale base coding that is now leading to products and applications that would not be possible without it. And they were part of the inspiration for a nanotechnology revolution[37] that is continuing to this day. Reengineering the material world in these novel ways far transcends what can be achieved through more conventional technologies.

This possibility was captured in a 1999 U.S. National Science and Technology Council report with the title Nanotechnology: Shaping the World Atom by Atom[38]. While it doesn’t explicitly mention quantum dots – an omission that I’m sure the authors are now kicking themselves over – it did capture just how transformative the ability to engineer materials at the atomic scale could be.

This atomic-level shaping of the world is exactly what Bawendi, Brus and Ekimov aspired to through their groundbreaking work. They were some of the first materials “base coders” as they used atomically precise engineering to harness the quantum physics of small particles – and the Nobel committee’s recognition of the significance of this is well deserved.

