One way physicists seek clues to unravel the mysteries of the universe is by smashing matter together and inspecting the debris. But these types of destructive experiments, while incredibly informative, have limits.

We are two scientists who study nuclear[1] and particle physics[2] using CERN’s Large Hadron Collider near Geneva, Switzerland. Working with an international group of nuclear and particle physicists, our team realized that hidden in the data from previous studies was a remarkable and innovative experiment.

In a new paper published in Physical Review Letters, we and our colleagues describe a new method for measuring how fast a particle called the tau wobbles[3].

Our novel approach looks at times when incoming particles in the accelerator whiz past each other rather than smashing together in head-on collisions. Surprisingly, this approach enables far more accurate measurements of the tau particle’s wobble than previous techniques. This is the first time in nearly 20 years scientists have measured this wobble, known as the tau magnetic moment[4], and it may help illuminate tantalizing cracks emerging in the known laws of physics[5].

A diagram showing a particle wobbling off of a vertical axis.
Electrons, muons and taus all wobble in a magnetic field like a spinning top. Measuring the wobbling speed can provide clues into quantum physics. Jesse Liu, CC BY-ND[6]

Why measure a wobble?

Electrons, the building blocks of atoms, have two heavier cousins called the muon and the tau[7]. Taus are the heaviest in this family of three and the most mysterious, as they exist only for minuscule amounts of time.

Interestingly, when you place an electron, muon or tau inside a magnetic field, these particles wobble in a manner similar to how a spinning top wobbles on a table. This wobble is called a particle’s magnetic moment. It is possible to predict how fast these particles should wobble using the Standard Model of particle physics[8] – scientists’ best theory of how particles interact.

Since the 1940s, physicists have been interested in measuring magnetic moments to reveal intriguing effects in the quantum world[9]. According to quantum physics, clouds of particles and antiparticles are constantly popping in and out of existence[10]. These fleeting fluctuations slightly alter how fast electrons, muons and taus wobble inside a magnetic field. By measuring this wobble very precisely, physicists can peer into this cloud to uncover possible hints of undiscovered particles.

A chart showing the basic particles.
Electrons, muons and taus are three closely related particles in the Standard Model of particle physics – scientists’ current best description of the fundamental laws of nature. MissMJ, Cush/Wikimedia Commons[11]

Testing electrons, muons and taus

In 1948, theoretical physicist Julian Schwinger first calculated how the quantum cloud alters the electron’s magnetic moment[12]. Since then, experimental physicists have measured the speed of the electron’s wobble to an extraordinary 13 decimal places[13].

The heavier the particle, the more its wobble will change because of undiscovered new particles lurking in its quantum cloud. Because electrons are so light, their wobble is only weakly sensitive to new particles.

Muons and taus are much heavier but also far shorter-lived than electrons. While muons exist only for mere microseconds, scientists at Fermilab near Chicago measured the muon’s magnetic moment to 10 decimal places[14] in 2021. They found that muons wobbled noticeably faster than Standard Model predictions, suggesting unknown particles may be appearing in the muon’s quantum cloud.

Taus are the heaviest particle of the family – 17 times more massive than a muon and 3,500 times heavier than an electron. This makes them much more sensitive to potentially undiscovered particles[15] in the quantum clouds. But taus are also the hardest to see, since they live for just a millionth of the time a muon exists.
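The mass ratios above are what make the tau so appealing: as a rough rule of thumb (an assumption of this sketch, not a claim from the paper), the shift a generic heavy new particle induces in a lepton’s magnetic moment grows with the lepton’s mass squared. A few lines of Python make the numbers concrete:

```python
# Approximate lepton masses in MeV/c^2 (Particle Data Group values, rounded).
m_e, m_mu, m_tau = 0.511, 105.66, 1776.9

print(round(m_tau / m_mu))         # → 17: the tau is ~17 times heavier than the muon
print(round(m_tau / m_e))          # → 3477: roughly the "3,500 times" quoted above
# If sensitivity to new particles scales as mass squared (a common rule of thumb),
# the tau would be a few hundred times more sensitive than the muon:
print(round((m_tau / m_mu) ** 2))  # → 283
```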

To date, the best measurement of the tau’s magnetic moment was made in 2004 using a now-retired electron collider[16] at CERN. Though an incredible scientific feat, that experiment could measure the speed of the tau’s wobble to only two decimal places[17], even after multiple years of collecting data. Unfortunately, to test the Standard Model, physicists would need a measurement 10 times as precise[18].

Diagram showing two particles nearly colliding.
Instead of colliding two nuclei head-on to create tau particles, two lead ions can whiz past each other in a near miss and still produce taus. Jesse Liu, CC BY-ND[19]

Lead ions for near-miss physics

Since the 2004 measurement of the tau’s magnetic moment, physicists have been seeking new ways to measure the tau wobble.

The Large Hadron Collider usually smashes the nuclei of two atoms together – that is why it is called a collider. These head-on collisions create a fireworks display of debris[20] that can include taus, but the noisy conditions preclude careful measurements of the tau’s magnetic moment.

From 2015 to 2018, CERN ran an experiment designed primarily to let nuclear physicists study the exotic hot matter[21] created in head-on collisions. The particles used in this experiment were lead nuclei stripped of their electrons – called lead ions. Lead ions are electrically charged and produce strong electromagnetic fields[22].

The electromagnetic fields of lead ions contain particles of light called photons. When two lead ions collide, their photons can also collide and convert all their energy into a single pair of particles. It was these photon collisions that scientists used to measure muons[23].

These lead ion experiments ended in 2018, but it wasn’t until 2019 that one of us, Jesse Liu, teamed up with particle physicist Lydia Beresford in Oxford, England, and realized the data from the same lead ion experiments could potentially be used to do something new: measure the tau’s magnetic moment.

This discovery was a total surprise[24]. It goes like this: Lead ions are so small that they often miss each other in collision experiments. But occasionally, the ions pass very close to each other without touching. When this happens, their accompanying photons can still smash together while the ions continue flying on their merry way.

These photon collisions can create a variety of particles – like the muons in the previous experiment, and also taus. But without the chaotic fireworks produced by head-on collisions, these near-miss events are far quieter and ideal for measuring traits of the elusive tau.

Much to our excitement, when the team looked back at data from 2018, we found that these lead ion near misses were indeed creating tau particles. There was a new experiment hidden in plain sight!

A long tube in an underground tunnel.
The Large Hadron Collider accelerates particles to incredibly high speeds before trying to smash particles together, but not all attempts result in successful collisions. Maximilien Brice/CERN[25], CC BY-SA[26]

First measurement of tau wobble in two decades

In April 2022, the CERN team announced that we had found direct evidence of tau particles created[27] during lead ion near misses. Using that data, the team was also able to measure the tau magnetic moment – the first time such a measurement had been done since 2004. The final results were published on Oct. 12, 2023.

This landmark result measured the tau wobble to two decimal places. Much to our astonishment, this method matched the precision of the previous best measurement while using only one month of data recorded in 2018.

After no experimental progress for nearly 20 years, this result opens an entirely new and important path toward the tenfold improvement in precision needed to test Standard Model predictions. Excitingly, more data is on the horizon.

The Large Hadron Collider just restarted lead ion data collection on Sept. 28, 2023[28], after routine maintenance and upgrades. Our team plans to quadruple the sample size of lead ion near-miss data by 2025. This increase in data will double the accuracy of the measurement of the tau magnetic moment, and improvements to analysis methods may go even further.
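The quoted improvement follows from how statistical uncertainty scales with sample size: assuming the measurement is statistics-dominated, the uncertainty shrinks with the square root of the amount of data. A minimal sketch:

```python
import math

def scaled_uncertainty(current_uncertainty: float, data_factor: float) -> float:
    """Statistical uncertainty shrinks as 1/sqrt(sample size)."""
    return current_uncertainty / math.sqrt(data_factor)

# Quadrupling the near-miss sample roughly halves the uncertainty,
# i.e. doubles the accuracy of the tau magnetic moment measurement.
print(scaled_uncertainty(1.0, 4))  # → 0.5 (in units of today's uncertainty)
```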

Tau particles are one of physicists’ best windows to the enigmatic quantum world, and we are excited for surprises that upcoming results may reveal about the fundamental nature of the universe.

Read more

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to the Curious Kids email address[2].

Why is space so dark despite all of the stars in the universe? – Nikhil, age 15, New Delhi

People have been asking why space is dark despite being filled with stars for so long that this question has a special name – Olbers’ paradox[3].

Astronomers estimate that there are about 200 billion trillion stars[4] in the observable universe. And many of those stars are as bright as or even brighter than our sun. So, why isn’t space filled with dazzling light?

I am an astronomer[5] who studies stars and planets – including those outside our solar system – and their motion in space. The study of distant stars and planets helps astronomers like me[6] understand why space is so dark.

You might guess it’s because a lot of the stars in the universe are very far away from Earth. It is true that the farther away a star is, the less bright it looks – a star 10 times farther away looks 100 times dimmer[7]. But it turns out this isn’t the whole answer.

Imagine a bubble

Pretend, for a moment, that the universe is so old that the light from even the farthest stars has had time to reach Earth. In this imaginary scenario, all of the stars in the universe are not moving at all.

Picture a large bubble with Earth at the center. If the bubble were about 10 light years[8] across, it would contain about a dozen stars[9]. At several light years away, many of those stars would look pretty dim from Earth. If you keep enlarging the bubble to 1,000 light years across, then to 1 million light years, and then 1 billion light years, the farthest stars in the bubble will look even fainter. But there would also be more and more stars inside the bigger and bigger bubble, all of them contributing light.
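This shell-by-shell bookkeeping can be written out as a toy calculation (a sketch assuming stars are spread uniformly, in made-up units): the number of stars in a shell at a given distance grows as the distance squared, while each star’s apparent brightness falls as the distance squared, so every shell contributes about the same amount of light.

```python
def shell_light(radius, star_density=1.0):
    """Light reaching Earth from a thin shell of stars at a given radius (toy units)."""
    stars_in_shell = star_density * radius ** 2   # shell surface area grows as r^2
    brightness_per_star = 1.0 / radius ** 2       # inverse-square dimming
    return stars_in_shell * brightness_per_star

# Near or far, each shell contributes the same amount of light:
print([shell_light(r) for r in (1, 10, 100)])  # → [1.0, 1.0, 1.0]
```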
Even though the farthest stars look dimmer and dimmer, there would be a lot more of them, and the whole night sky should look very bright.

It seems I’m back where I started, but I’m actually a little closer to the answer.

Age matters

In the imaginary bubble illustration, I asked you to imagine that the stars are not moving and that the universe is very old. But the universe is only about 13 billion years old[10].
Image of lightly colored galaxies and stars against dark background
Galaxies as they appeared approximately 13.1 billion years ago, taken by the James Webb Space Telescope. NASA/ESA/CSA/STScI/Handout from Xinhua News Agency via Getty Images[11]

Even though that’s an amazingly long time in human terms, it’s short in astronomical terms. It’s short enough that the light from stars more distant than about 13 billion light years hasn’t actually reached Earth yet. And so the actual bubble around Earth that contains all the stars we can see only extends out to about 13 billion light years from Earth[12].

There just are not enough stars in the bubble to fill every line of sight. If you look in some directions in the sky, you can see stars. If you look at other bits of the sky, you can’t see any stars. That’s because, in those dark spots, the stars that could block your line of sight are so far away that their light hasn’t reached Earth yet. As time passes, light from these more and more distant stars will have time to reach us.

The Doppler shift

You might ask whether the night sky will eventually light up completely. But that brings me back to the other thing I told you to imagine: that all of the stars are not moving. The universe is actually expanding, with the most distant galaxies moving away from Earth at nearly the speed of light[13].

Because the galaxies are moving away so fast, the light from their stars is pushed into colors the human eye can’t see. This effect is called the Doppler shift[14]. So, even if it had enough time to reach you, you still couldn’t see[15] the light from the most distant stars with your eyes. And the night sky would not be completely lit up.
The Doppler shift, also known as the redshift, is a phenomenon in which light from objects that are moving away from an observer appears more toward the red end of the spectrum.
If you wait even longer, eventually the stars will all burn out – stars like the sun last only about 10 billion years[16]. Astronomers hypothesize that in the distant future – a thousand trillion years from now – the universe will go dark, inhabited only by stellar remnants[17] like white dwarfs and black holes.

Even though our night sky isn’t completely filled with stars, we live in a very special time in the universe’s life, when we’re lucky enough to enjoy a rich and complex night sky, filled with light and dark.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to the Curious Kids email address[18]. Please tell us your name, age and the city where you live. And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Read more

Itching can be uncomfortable, but it’s a normal part of your skin’s immune response to external threats. When you’re itching from an encounter with poison ivy or mosquitoes, consider that your urge to scratch may have evolved to get you to swat away disease-carrying pests[1].

However, for many people who suffer from chronic skin diseases like eczema, the sensation of itch can fuel a vicious cycle[2] of scratching that interrupts sleep, reduces productivity and prevents them from enjoying daily life[3]. This cycle is caused by sensory neurons and skin immune cells[4] working together to promote itching and skin inflammation.

But, paradoxically, some of the mechanisms behind this feedback loop also stop inflammation from getting worse. In our newly published research, my team of immunologists and neuroscientists and I[5] discovered that a specific type of itch-sensing neuron can push back on the itch-scratch-inflammation cycle[6] in the presence of a small protein. This protein, called interleukin-31, or IL-31[7], is typically involved in triggering itching.

This negative feedback loop – like the vicious cycle – is only possible because the itch-sensing nerve endings in your skin are closely intertwined with the millions of cells that make up your skin’s immune system[8].

Your skin has its own immune system.

An itchy molecule

The protein IL-31 is key to the connection between the nervous and immune systems. This molecule is produced by some immune cells[9], and like other members of this molecule family[10], it specializes in helping immune cells communicate with each other.

IL-31 is rarely present in the skin or blood of people who don’t have a history of eczema, allergies, asthma or related conditions. But those with conditions like eczema that cause chronic itch have significantly increased skin production of IL-31[11]. There is strong evidence that IL-31 is one of a small set of proteins produced by immune cells that can bind directly to sensory neurons and trigger itching[12]. Small amounts of purified IL-31 injected directly into skin or spinal fluid lead to impressively rapid-onset itching and scratching[13].

However, when my colleagues and I induced rashes in mice by exposing them to dust mites, we found that itch-sensing neurons turned down the dial on inflammation at the site of itching instead of promoting it. They did so by secreting small molecules called neuropeptides[14] that, in this context, directed immune cells to respond less enthusiastically. In sum, we had discovered an inverse relationship between itching and skin inflammation, tethered by a single molecule.

But if IL-31 triggers itching, which can worsen inflammation by making patients scratch their skin, how does it reduce inflammation?

We found the answer to this paradox in a little-known function of sensory neurons called neurogenic inflammation[15]. This nerve reflex triggers sensory neurons to release various signaling molecules directly into tissues, including specific neuropeptides that promote signs of inflammation[16] like increased blood flow to the skin. Neurogenic inflammation acts within the same nerves that transmit sensory information like itch, pain, touch and temperature, but differs by the path it takes: away from the brain rather than toward it.

We discovered that IL-31 can induce neurogenic inflammation, mapping a direct pathway[17] going from IL-31 through sensory neurons to repress immune cells in the skin. When we engineered mice to be unresponsive to IL-31, we similarly found that they had more activated skin immune cells that produced more inflammation. This means the net effect of IL-31 is to blunt overall inflammation.

Profile of child with pacifier in mouth and eczema rash on throat
Eczema’s vicious cycle of itch-scratch-inflammation can significantly affect a patient’s quality of life. SBenitez/Moment via Getty Images[18]

IL-31 as potential treatment

Our study shows that IL-31 causes sensory neurons in the skin to perform two very different functions[19]: They signal inward to the spinal cord and brain to stimulate an itching sensation that typically leads to more inflammation, but they also signal back out to the skin and quell inflammation by inhibiting certain immune cells.

Although paradoxical, this makes evolutionary sense. Scratching an itch can feel very satisfying but doesn’t have much utility in the modern world where we’re more likely to suffer from compulsive scratching than encounter stinging nettles. In contrast, unchecked inflammation underlies many chronic autoimmune diseases. Therefore, turning off an immune response in inflamed tissue can be as important as turning it on.

Our discoveries raise important questions about the implications of modifying IL-31 to treat different diseases. For one, it isn’t clear how IL-31-sensing neurons interface with other neuronal circuits[20] that also regulate skin inflammation. Furthermore, some patients have higher levels of allergic proteins[21] in their blood or develop asthma flares[22] when taking existing drugs that target IL-31. IL-31 is also found in some lung and gut cells – how and why would an itch-inducing molecule be present in internal organs?

Anatomical niches where sensory neurons and immune cells converge are present throughout the human body. If an itchy molecule like IL-31 can use neuronal circuitry to dampen an immune response in the skin, similar molecules like those used in migraine drugs[23] could be repurposed to treat skin conditions, too.

Read more

Because of its unique national security challenges, Israel has a long history of developing highly effective, state-of-the-art defense technologies and capabilities. A prime example of Israeli military strength is the Iron Dome air defense system[1], which has been widely touted as the world’s best defense against missiles and rockets[2].

However, on Oct. 7, 2023, Israel was caught off guard by a very large-scale missile attack by the Gaza-based Palestinian militant group Hamas. The group fired several thousand missiles[3] at a number of targets across Israel, according to reports. While exact details are not available, it is clear that a significant number of the Hamas missiles penetrated the Israeli defenses, inflicting extensive damage and casualties.

I am an aerospace engineer[4] who studies space and defense systems. There is a simple reason the Israeli defense strategy was not fully effective against the Hamas attack. To understand why, you first need to understand the basics of air defense systems.

Air defense: detect, decide, disable

An air defense system consists of three key components. First, there are radars to detect, identify and track incoming missiles. The range of these radars varies. Iron Dome’s radar is effective over distances of 2.5 to 43.5 miles (4 to 70 km)[5], according to its manufacturer Raytheon. Once an object has been detected by the radar, it must be assessed to determine whether it is a threat. Information such as direction and speed is used to make this determination.

If an object is confirmed as a threat, Iron Dome operators continue to track the object by radar. Missile speeds vary considerably, but assuming a representative speed of 3,280 feet per second (1 km/s), the defense system has about a minute at most to respond to an attack.
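That response window is just the detection range divided by the missile’s speed. A back-of-the-envelope sketch using the figures quoted above (illustrative numbers, not operational data):

```python
def response_time_s(detection_range_km: float, missile_speed_km_s: float) -> float:
    """Seconds between radar detection and impact, assuming constant missile speed."""
    return detection_range_km / missile_speed_km_s

# Detection at the outer edge of Iron Dome's quoted radar range (70 km),
# with a representative missile speed of 1 km/s:
print(response_time_s(70, 1.0))  # → 70.0 seconds, about a minute
# A launch detected at the inner edge (4 km) leaves only seconds:
print(response_time_s(4, 1.0))   # → 4.0 seconds
```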

a diagram showing the trajectory of a missile along with a radar system tracking the missile and a defensive missile intercepting the attacking missile
The fundamental elements of a missile defense system. Nguyen, Dang-An et al.[6], CC BY-NC[7]

The second major element of an air defense system is the battle control center. This component determines the appropriate way to engage a confirmed threat. It uses the continually updating radar information to determine the optimal response in terms of from where to fire interceptor missiles and how many to launch against an incoming missile.

The third major component is the interceptor missile itself. For Iron Dome, it is a supersonic missile with heat-seeking sensors. These sensors provide in-flight updates to the interceptor, allowing it to steer toward and close in on the threat. The interceptor uses a proximity fuse activated by a small radar to explode close to the incoming missile so that it does not have to hit it directly to disable it.

Limits of missile defenses

Israel has at least 10 Iron Dome batteries in operation[8], each containing 60 to 80 interceptor missiles. Each of those missiles costs about US$60,000. In previous attacks involving smaller numbers of missiles and rockets, Iron Dome was 90% effective against a range of threats.

So, why was the system less effective against the recent Hamas attacks?

It is a simple question of numbers. Hamas fired several thousand missiles, and Israel had fewer than a thousand interceptors in the field ready to counter them. Even if Iron Dome were 100% effective against the incoming threats, the very large number of Hamas missiles meant some were going to get through.

The Hamas attacks illustrate very clearly that even the best air defense systems can be overwhelmed if they are overmatched by the number of threats they have to counter.

How Iron Dome works.

The Israeli missile defense has been built up over many years, with high levels of financial investment. How could Hamas afford to overwhelm it? Again, it all comes down to numbers. The missiles fired by Hamas cost about $600 each, and so they are about 100 times less expensive than the Iron Dome interceptors. The total cost to Israel of firing all of its interceptors is around $48 million. If Hamas fired 5,000 missiles, the cost would be only $3 million.
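The cost asymmetry is straightforward arithmetic on the figures above (all numbers are the article’s estimates, not official prices):

```python
interceptor_cost = 60_000   # dollars per Iron Dome interceptor (article's estimate)
attacker_cost = 600         # dollars per Hamas missile (article's estimate)

batteries = 10
interceptors_per_battery = 80   # upper end of the quoted 60-80 range

defense_total = batteries * interceptors_per_battery * interceptor_cost
attack_total = 5_000 * attacker_cost

print(defense_total)                      # → 48000000: $48 million to fire every interceptor
print(attack_total)                       # → 3000000: $3 million for 5,000 attacking missiles
print(interceptor_cost // attacker_cost)  # → 100: each interceptor costs ~100x more
```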

Thus, in a carefully planned and executed strategy, Hamas accumulated over time a large number of relatively inexpensive missiles that it knew would overwhelm the Iron Dome defensive capabilities. Unfortunately for Israel, the Hamas attack represents a very clear example of military asymmetry: a low-cost, less-capable approach was able to defeat a more expensive, high-technology system.

Future air defense systems

The Hamas attack will have repercussions for all of the world’s major military powers. It clearly illustrates the need for air defense systems that are much more effective in two important ways. First, there is the need for a much deeper arsenal of defensive weapons that can address very large numbers of missile threats. Second, the cost per defensive weapon needs to be reduced significantly.

This episode is likely to accelerate the development and deployment of directed energy air defense systems[9] based on high-energy lasers and high-power microwaves. These devices are sometimes described as having an “infinite magazine[10],” because they have a relatively low cost per shot fired and can keep firing as long as they are supplied with electrical power.

Read more

When you hear the word comet, you might imagine a bright streak moving across the sky. You may have a family member who saw a comet before you were born, or you may have seen one yourself when comet Nishimura passed by Earth[1] in September 2023. But what are these special celestial objects made of? Where do they come from, and why do they have such long tails?

As a planetarium director[2], I spend most of my time getting people excited about and interested in space. Nothing piques people’s interest in Earth’s place in the universe quite like comets. They’re unpredictable, and they often go undetected until they get close to the Sun. I still get excited when one comes into view.

What exactly is a comet?

Comets are leftover material from the formation of the solar system. As the solar system formed about 4.5 billion years ago[3], most gas, dust, rock and metal ended up in the Sun or the planets. What did not get captured was left over as comets and asteroids[4].

Because comets are[5] clumps of rock, dust, ice and the frozen forms of various gases and molecules, they’re often called[6] “dirty snowballs” or “icy dirtballs” by astronomers. These clumps of ice and dirt make up what’s called the comet nucleus.

A diagram showing comet nuclei, which look like gray rocks, of progressively larger sizes.
Size comparison of various comet nuclei. NASA, ESA, Zena Levy (STScI)[7]

Outside the nucleus is a porous, almost fluffy layer of ice, kind of like a snow cone. This layer is surrounded by a dense crystalline crust[8], which forms when the comet passes near the Sun and its outer layers heat up. With a crispy outside and a fluffy inside, astronomers have compared comets to deep-fried ice cream[9].

Most comets are a few miles wide[10], and the largest known is about 85 miles[11] wide. Because they are relatively small and dark compared with other objects in the solar system, people can’t see them unless the comet gets close to the Sun.

Pin the tail on the comet

Starry sky with a comet in the mid left portion of the image and a tree in the foreground
Comet Hale-Bopp as seen from Earth in 1997. The blue ion tail is visible to the top left of the comet. Philipp Salzgeber[12], CC BY-ND[13]

As a comet moves close to the Sun, it heats up. The various frozen gases and molecules making up the comet change directly from solid ice to gas in a process called sublimation[14]. This sublimation process releases dust particles trapped under the comet’s surface.

The dust and released gas form a cloud around the comet called a coma. This gas and dust interact with the Sun to form two different tails[15].

The first tail, made up of gas, is called the ion tail[16]. The Sun’s radiation strips electrons from the gases in the coma, leaving them with a positive charge. These charged gases are called ions. Wind from the Sun then pushes these charged gas particles directly away from the Sun, forming a tail that appears blue in color. The blue color comes from large numbers of carbon monoxide[17] ions in the tail.

The dust tail forms from the dust particles released during sublimation. These are pushed away from the Sun by pressure caused by the Sun’s light[18]. The tail reflects the sunlight and swoops behind the comet as it moves, giving the comet’s tail a curve[19].

The closer a comet gets to the Sun, the longer and brighter its tail will grow. The tail can grow significantly longer than the nucleus and clock in at around half a million miles long[20].

Where do comets come from?

All comets have highly eccentric orbits[21]. Their paths are elongated ovals with extreme trajectories that take them both very close to and very far from the Sun.

Comets’ orbits can be very long, meaning they may spend most of their time in far-off reaches of the solar system.

An object will orbit faster the closer it is[22] to the Sun, as angular momentum is conserved[23]. Think about how an ice skater spins faster[24] when they bring their arms in closer to their body – similarly, comets speed up when they get close to the Sun. Otherwise, comets spend most of their time moving relatively slowly through the outer reaches of the solar system.
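Conservation of angular momentum makes this speed-up easy to quantify. At a comet’s closest and farthest points, its velocity is perpendicular to the line to the Sun, so speed times distance is the same at both points. The sketch below uses rough numbers for Halley’s comet (illustrative values, not precise orbital elements):

```python
# Angular momentum conservation at the orbit's extremes:
# v_perihelion * r_perihelion = v_aphelion * r_aphelion
def aphelion_speed(perihelion_speed_km_s, perihelion_au, aphelion_au):
    """Speed at the orbit's farthest point, given the speed at its closest point."""
    return perihelion_speed_km_s * perihelion_au / aphelion_au

# Roughly Halley's comet: ~54.5 km/s at 0.586 AU, swinging out to ~35.1 AU.
print(round(aphelion_speed(54.5, 0.586, 35.1), 2))  # → 0.91 km/s, about 60x slower
```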

A lot of comets likely originate in a far-out region of our solar system called the Oort cloud[25].

The Oort cloud is predicted to be a round shell of small solar system bodies[26] that surrounds the solar system, with an innermost boundary about 2,000 times farther from the Sun than Earth is. For reference, Pluto is only about 40 times farther[27].

Sphere of small particles with a disk like structure in the middle. A tiny rectangle in the center points to a zoomed in image of the Sun and planet orbits
A NASA diagram of the Oort cloud’s structure. The term KBO refers to Kuiper Belt objects near where Pluto lies. NASA[28]

Comets from the Oort cloud take over 200 years to complete their orbits, a metric called the orbital period. Because of their long periods, they’re called long-period comets[29]. Astronomers often don’t know much about these comets until they get close to the inner solar system.

Short-period comets[30], on the other hand, have orbital periods of less than 200 years. Halley’s comet is a famous comet that comes close to the Sun every 75 years.

While that’s a long time for a human, that’s a short period for a comet. Short-period comets generally come from the Kuiper Belt[31], an asteroid belt out beyond Neptune and, most famously, the home of Pluto.

There’s a subset of short-period comets that get only to about Jupiter’s orbit at their farthest point from the Sun. These have orbital periods of less than 20 years and are called Jupiter-family comets[32].
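These orbital periods follow from Kepler’s third law: for anything orbiting the Sun, the period in years equals the orbit’s semi-major axis in astronomical units raised to the power 3/2. A quick sketch (the 17.8 AU figure for Halley’s comet is an approximate value):

```python
def orbital_period_years(semi_major_axis_au: float) -> float:
    """Kepler's third law for solar orbits: P (years) = a (AU) ** 1.5."""
    return semi_major_axis_au ** 1.5

# Halley's comet, with a semi-major axis of roughly 17.8 AU:
print(round(orbital_period_years(17.8)))   # → 75 years, a short-period comet
# An Oort cloud comet with a semi-major axis of ~2,000 AU:
print(round(orbital_period_years(2000)))   # → 89443 years, far beyond 200 years
```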

Comets’ time in the inner solar system is relatively short, generally on the order of weeks to months[33]. As they approach the Sun, their tails grow and they brighten before fading on their way back to the outer solar system.

But even the short-period comets don’t come around often, and their porous interior means they can sometimes fall apart. All of this makes their behavior difficult to predict[34]. Astronomers can track comets when they are coming toward the inner solar system and make predictions based on observations. But they never quite know if a comet will get bright enough to be seen with the naked eye as it passes Earth, or if it will fall apart and fizzle out as it enters the inner solar system.

Either way, comets will keep people looking up at the skies for years to come.

Read more

Cancer arises when cells accumulate enough damage to change their normal behavior. The likelihood of accruing damage increases with age[1] because the safeguards in your genetic code that ensure cells function for the greater good of the body weaken over time.

Why, then, do children who haven’t had sufficient time to accumulate damage develop cancer?

I am a doctoral student[2] who is exploring the evolutionary origins of cancer. Viewed through an evolutionary lens, cancer develops from the breakdown of the cellular collaboration[3] that initially enabled cells to come together and function as one organism.

Cells in children are still learning how to collaborate. Pediatric cancer develops when rogue cells that defy cooperation emerge and grow at the body’s expense.

Adult versus pediatric cancer

The cells in your body adhere to a set of instructions defined by their genetic makeup[4] – a unique code that carries all the information that cells need to perform their specific function. When cells divide, the genetic code is copied and passed from one cell to another. Copying errors can occur in this process and contribute to the development of cancer.

In adults, cancer evolves through a gradual accrual of errors and damage in the genetic code. Although there are safeguards against uncontrolled cell growth[5] and repair mechanisms[6] to fix genetic errors, aging, exposure to environmental toxins and an unhealthy lifestyle can weaken these protections and lead to the breakdown of tissues. The most common types of adult cancers, such as breast cancer[7] and lung cancer[8], often result from such accumulated damage.

In children, whose tissues are still developing, there is a dual dynamic between growth and cancer prevention. On one hand, rapidly dividing cells are organizing themselves into tissues in an environment with limited immune surveillance[9] – an ideal setting for cancer development. On the other hand, children have robust safeguards and tightly regulated mechanisms that act as counterforces against cancer and make it a rare occurrence.

Although pediatric cancer is rare, it is a leading cause of death for children under 15 in the U.S. FatCamera/E+ via Getty Images[10]

Children seldom accumulate errors in their genetic code, and pediatric cancer patients have a much lower incidence of genetic errors[11] than adult cancer patients. However, nearly 10%[12] of pediatric cancer[13] cases in the U.S. are due to inherited genetic mutations. The most common heritable cancers arise from genetic errors that influence cell fate – that is, what a cell becomes – during the developmental stages before birth. Mistakes in embryonic cells are passed on to all subsequent cells after birth and can ultimately manifest as cancer.

Pediatric cancers can also spontaneously arise while children are growing. These are driven by genetic alterations distinct from those common in adults. Unlike in adults, where damage typically accumulates as small errors during cell division, pediatric cancers often result from large-scale rearrangements[14] of the genetic code. Different regions of the genetic code swap places, disrupting the cell’s instructions beyond repair.

Such changes frequently occur in tissues with constant turnover, such as the brain[15], muscles[16] and blood[17]. Unsurprisingly, the most prevalent[18] pediatric cancers often emerge from these tissues.

Genetic alterations are not a prerequisite for pediatric cancers. In certain pediatric brain cancers, the region of the genetic code responsible for cell specialization becomes permanently silenced[19]. Although there is no error in the genetic code itself, the cell is unable to read it. Consequently, these cells become trapped in an uncontrolled state of division, ultimately leading to cancer.

Tailoring treatments for pediatric cancer

Cells in children typically exhibit greater growth, mobility and flexibility. This means that pediatric cancer is often more invasive and aggressive[20] than cancer in adults, and it can severely affect development even after successful therapy because of long-term damage. Because the cancer trajectories in children and adults are markedly different, treatment approaches should also be different for each.

Standard cancer therapy includes radiotherapy or chemotherapy, which affect both cancerous and healthy, actively dividing cells. If the patient becomes unresponsive to these treatments, oncologists try a different drug.

In children, the side effects of certain treatments are amplified[21] since their cells are actively growing. Unlike adult cancers, where different drugs can target different genetic errors, pediatric cancers have fewer of these targets[22]. The rarity of pediatric cancer also makes it challenging to test new therapies in large-scale clinical trials.

Standard cancer treatments can lead to lifelong effects for pediatric patients.

A common reason for treatment failure is that cancer cells adapt to evade treatment and become drug resistant[23]. Applying principles from evolutionary biology to cancer treatment can help tackle this problem.

For example, extinction therapy[24] is an approach to treatment inspired by natural mass extinction events. The goal of this therapy is to eradicate all cancer cells before they can evolve. It does this by applying a “first strike” drug that kills most cancer cells. The remaining few cancer cells are then targeted through focused, smaller-scale interventions.

If complete extinction is not possible, the goal turns to preventing treatment resistance and keeping the tumor from progressing. This can be achieved with adaptive therapy[25], which takes advantage of the competition for survival among cancer cells. Treatment is dynamically turned “on” and “off” to keep the tumor stable while allowing cells that are sensitive to the therapy to out-compete and suppress resistant cells. This approach preserves the tissue[26] and improves survival.

Although pediatric cancer patients have a better prognosis than adults do after treatment, cancer remains the second-leading cause of death[27] in children under 15 in the U.S. Recognizing the developmental differences between pediatric and adult cancers and using evolutionary theory[28] to “anticipate and steer[29]” the cancer’s trajectory can enhance outcomes for children. This could ultimately improve young patients’ chances for a brighter, cancer-free future.


Everyone has a different tolerance for spicy food — some love the burn, while others can’t take the heat. But the scientific consensus on whether spicy food can have an effect — positive or negative — on your health is pretty mixed.

In September 2023, a 14-year-old boy died after consuming a spicy pepper as part of the viral “one chip challenge[1].” The Paqui One Chip Challenge uses Carolina Reaper and Naga Viper peppers, which are among the hottest peppers in the world[2].

While the boy’s death is still under examination by health officials, it has led some stores to remove the spicy chips used in these challenges[3].

Many stores have removed the Paqui One Chip Challenge chips from their shelves. AP Photo/Steve LeBlanc[4]

As an epidemiologist[5], I’m interested in how spicy food can affect people’s health and potentially worsen symptoms associated with chronic diseases like inflammatory bowel disease. I am also interested in how diet, including spicy foods, can increase or decrease a person’s lifespan.

The allure of spicy food

Spicy food can refer to food with plenty of flavor from spices, such as Asian curries, Tex-Mex dishes or Hungarian paprikash. It can also refer to foods with noticeable heat from capsaicin[6], a chemical compound found to varying degrees in hot peppers[7].

As the capsaicin content of a pepper increases, so does its ranking on the Scoville scale[8], which quantifies how hot a pepper tastes.

Capsaicin tastes hot because it activates certain biological pathways[9] in mammals – the same pathways activated by hot temperatures[10]. The pain produced by spicy food can provoke the body[11] to release endorphins and dopamine. This release can prompt a sense of relief or even a degree of euphoria.

In the U.S., the U.K. and elsewhere, more people than ever are consuming spicy foods[12], including extreme pepper varieties.

Hot-pepper-eating contests and similar “spicy food challenges” aren’t new, although spicy food challenges have gotten hotter – in terms of spice level and popularity on social media[13].

Hot peppers like the Carolina Reaper can induce sweating and make the consumer feel like their mouth is burning.

Short-term health effects

The short-term effects of consuming extremely spicy foods range from a pleasurable sensation of heat to an unpleasant burning sensation[14] across the lips, tongue and mouth. These foods can also cause various forms of digestive tract discomfort[15], headaches and vomiting[16].

If spicy foods are uncomfortable to eat, or cause unpleasant symptoms like migraines, abdominal pain and diarrhea, then it’s probably best to avoid those foods. Spicy food may cause these symptoms in people with inflammatory bowel diseases[17], for example.

Spicy food challenges notwithstanding, for many people across the world, consumption of spicy food is part of a long-term lifestyle influenced by geography and culture[18].

For example, hot peppers grow in hot climates, which may explain why many cultures in these climates use spicy foods[19] in their cooking. Some research suggests that spicy foods help control foodborne illnesses[20], which may also explain cultural preferences for spicy foods[21].

Chile peppers growing in Mexico. AP Photo/Andres Leighton[22]

Lack of consensus

Nutritional epidemiologists have been studying the potential risks and benefits of long-term spicy food consumption for many years. Some of the outcomes examined[23] in relation to spicy food consumption include obesity[24], cardiovascular disease[25], cancer[26], Alzheimer’s disease[27], heartburn and ulcers[28], psychological health[29], pain sensitivity[30] and death from any cause[31] – also called all-cause mortality.

These studies report mixed results, with some outcomes like heartburn more strongly linked to spicy food consumption. As can be expected with an evolving science, some experts are more certain about some of these health effects than others.

For example, some experts state with confidence that spicy food does not cause stomach ulcers[32], whereas the association with stomach cancer[33] isn’t as clear.

When taking heart disease, cancer and all other causes of death in a study population into consideration, does eating spicy food increase or decrease the risk of early death?

Right now, the evidence from large population-based studies suggests that spicy food does not increase the risk of all-cause mortality among a population and may actually decrease the risk[34].

However, when considering the results of these studies, keep in mind that what people eat is one part of a larger set of lifestyle factors – such as physical activity, relative body weight and consumption of tobacco and alcohol – that also have health consequences.

It’s not easy for researchers to measure diet and lifestyle factors accurately in a population-based study, at least in part because people don’t always remember or report their exposure[35] accurately. It often takes numerous studies conducted over many years to reach a firm conclusion about how a dietary factor affects a certain aspect of health.

Scientists still don’t entirely know why so many people enjoy spicy foods[36] while others do not, although there is plenty of speculation[37] regarding evolutionary, cultural and geographic factors, as well as medical, biological and psychological ones[38].

One thing experts do know, however, is that humans are one of the few animals that will intentionally eat something spicy enough to cause pain, all for the sake of pleasure[39].

