Disasters can happen anywhere.

Some places are more prone to hazards such as earthquakes, flooding and hurricanes, but nowhere is the risk zero. The good news is that humans can make good decisions that lower the odds of such hazards turning into disasters. Technology can help determine where to make investments to save the most lives.

The terrible devastation caused by the magnitude 6.8 earthquake[1] in Morocco on Sept. 8, 2023, is largely the result of centuries-old historic buildings and the continued use of old construction methods and materials[2] such as clay bricks and unreinforced masonry. These building materials are prevalent worldwide[3], particularly in developing countries[4].

Engineers like me[5] tend to focus on tangible decisions related to how buildings are constructed – for example, the amount and location of steel reinforcement. Over the last several decades, I’ve conducted the world’s largest shake table tests[6], placing a full-size apartment building on a platform that simulates seismic activity, and I’ve led teams of experts to investigate earthquakes, hurricanes, tornadoes and floods – but I’ve never become used to devastation like we are seeing in Morocco now.

As we are reminded by each disaster, mitigation is needed to make our homes, offices and schools safer and more resilient to earthquakes. Retrofitting buildings is expensive – and that cost represents a daunting challenge for developing nations like Morocco and Syria[7], as well as developed nations like Turkey – all three of which were devastated recently by major earthquakes.

And yet, I am optimistic because I know thousands of engineers around the world are working and collaborating to make earthquakes less deadly.

A group of people walk by buildings devastated by the earthquake.
The Morocco earthquake damaged thousands of homes and buildings, including many of the country’s long-standing historical landmarks. Wang Dongzhen/Xinhua News Agency via Getty Images[8]

How earthquakes devastate buildings

Before we can discuss how to make people safer in earthquakes, it helps to understand the forces at work during these destructive events.

The extent of the damage done by an earthquake is determined by several factors, including its magnitude – how much energy the earthquake releases from the fault – the depth of the fault[9], and how far the building is from the epicenter of the quake.
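
To put magnitude in perspective, seismologists relate it to the energy an earthquake radiates using the standard Gutenberg–Richter energy relation – a general rule of thumb, not a figure from this article:

    log10(E) ≈ 1.5 M + 4.8, with E in joules

For a magnitude 6.8 quake like Morocco’s, that works out to roughly 10^15 joules – on the order of 240 kilotons of TNT equivalent – and each additional unit of magnitude corresponds to about 32 times more energy.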

An epicenter is the location on the Earth’s surface directly above the point where the fault ruptured. Essentially, it is ground zero for the quake, where shaking is most intense and buildings are more likely to collapse.

If the columns and walls of a multi-story building are not stiff and strong enough to resist the forces of an earthquake, gravity takes over. The building usually collapses at the bottom floor level, causing the stories above to follow. Anyone inside can be trapped or crushed by falling debris. Stopping this requires modern design codes[10], significant investment and enforcement of those design codes. There are always challenges – but that doesn’t mean there haven’t been some success stories.

California plans ahead

Consider the city of San Francisco. More than a decade ago, this densely populated Northern California city realized it had thousands of apartment buildings with parking at the ground level. These are known as “soft-story” buildings, and they are more prone to collapse because they lack the strength and stiffness that reinforcement[11] would provide at the ground level. Many are likely to collapse in a moderate-to-major earthquake, while many more would require months to repair.

Through a self-study completed in 2010[12], San Francisco recognized that even if nobody was killed or injured in an earthquake, damage to these multi-unit residential buildings would result in a significant number of people losing their homes and leaving the city, changing its character forever. In 2013, the city began a mandatory retrofit program[13]. So far, more than 700 soft-story buildings[14] have been retrofitted. Federal grants of up to US$13,000[15] that became available in early 2023 are expected to accelerate this progress.

Los Angeles[16] followed suit in 2015, passing a law that required retrofitting of both soft-story wood-framed and older concrete buildings prone to collapse. As of 2023, 69% of soft-story buildings in LA[17] had been retrofitted. Progress on the concrete structures was slower but is moving ahead.

Retrofitting the larger multi-unit apartment buildings in San Francisco and LA costs between $60,000 and $130,000 – but the investment for a typical single-family home in the U.S. starts as low as $3,000[18].

Communities outside the U.S. have also built back better after earthquakes.

In 1995, Kobe, Japan, was rocked by a major earthquake that resulted in more than 5,000 fatalities and $200 billion in damage. As the city rebuilt, officials took the opportunity to improve their building code using updated strengthening and stiffening techniques[19].

Christchurch, New Zealand, was devastated in 2011 by two earthquakes that destroyed much of the downtown area. While many buildings didn’t collapse – a sign that the building code worked to some degree – many were damaged beyond repair. Demolishing them presented an opportunity to focus on resilient construction[20].

Amidst the rubble, a team of uniformed firefighters in hard hats search through the debris left by the quake.
In Amizmiz, Morocco, search-and-rescue teams look for survivors trapped beneath the rubble. Davide Bonaldo/SOPA Images/LightRocket via Getty Images[21]

Focusing efforts

So how can people and governments figure out where best to invest to decrease our exposure to natural hazards?

The center I co-direct brings together specialists from 14 universities[22] to determine how to measure a community’s resilience to natural hazards, so that communities can plan for, absorb, and recover rapidly from those hazards. A policy directive[23] during the Obama administration resulted in funds being focused on improving resilience throughout the U.S.

To improve resilience, we have to be able to quantify and measure it. To do this, we’ve developed a computer model called IN-CORE[24] that communities can use to measure the short- and long-term effects of “what if” scenarios on their households, social institutions, physical infrastructure and local economy. Each interacting algorithm that makes up the model is based on scientifically rigorous research documented in the teams’ almost 200 peer-reviewed publications over the last eight years[25]. Our system allows stakeholders to make resilience-informed decisions and measure the impacts on vulnerable populations. For example, we know that it is vital that social institutions such as schools and hospitals remain intact[26] after a hazard event.

One example of utilizing IN-CORE is the center’s engagement with Salt Lake County, Utah. The county is planning for a major earthquake – an event that is inevitable according to experts from the U.S. Geological Survey[27]. Understanding where investment will have its biggest impact is critical because time and money are limited. Our system will help Salt Lake County determine which building retrofits will provide the most return on investment based on physical services, social services and economic and population stability.
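
To make “return on investment” concrete, here is a deliberately simplified sketch – hypothetical numbers and a toy calculation, not IN-CORE’s actual interface or Salt Lake County data – that ranks retrofit options by the expected losses they avoid per dollar spent:

    # Hypothetical retrofit options: (name, retrofit cost in dollars, annual chance
    # of a damaging quake, losses avoided in dollars if one occurs).
    options = [
        ("soft-story apartment block", 100_000, 0.02, 2_000_000),
        ("unreinforced masonry school", 400_000, 0.02, 12_000_000),
        ("older concrete office building", 900_000, 0.02, 9_000_000),
    ]

    HORIZON_YEARS = 50  # planning horizon

    def benefit_cost_ratio(cost, annual_prob, losses_avoided):
        # Expected losses avoided over the horizon, divided by the retrofit cost.
        return (annual_prob * HORIZON_YEARS * losses_avoided) / cost

    for name, cost, prob, avoided in sorted(
            options, key=lambda o: benefit_cost_ratio(o[1], o[2], o[3]), reverse=True):
        print(f"{name}: benefit/cost ratio of about {benefit_cost_ratio(cost, prob, avoided):.0f}")

A real model like IN-CORE layers many interacting effects – housing, schools, hospitals, the local economy – on top of this kind of comparison.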

One goal of the IN-CORE Project[28] is to assist communities recently identified by the Federal Emergency Management Agency as Community Disaster Resilience Zones[29], or areas in the U.S. most at risk from the effects of natural hazards and climate change.

More broadly, we plan to partner with communities and regions worldwide, always keeping our eye on ensuring socially equitable solutions. For example, as the earthquake in Morocco shows, it is important to consider not just urban centers, but rural communities – like those in the Atlas Mountains that have suffered so much loss[30].

Read more

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to Curious Kids at The Conversation[2].


What happens if you have to go to the bathroom in your sleep? – Calleigh H., age 11, Oklahoma


As you drink water during the day, your body turns extra liquid it doesn’t need into pee. Your bladder stores the urine and eventually alerts you when it’s time to take a trip to the toilet.

But what about at night? How does your body know not to pee while you’re asleep?

Just because you’re snoozing doesn’t mean your body is totally offline – continuous processes like breathing, digestion and, yes, making pee, still happen while you’re asleep. Your bladder and your brain work together to know what to do with that big glass of water you drank before bed.

Using the bathroom every day is routine for many people, so it’s something you might not pay much attention to. But as a pediatric urologist[3], understanding how the brain and bladder work together – and sometimes miscommunicate – is an important part of my job.

The bladder and the brain

The bladder has two main jobs: to safely store urine and to empty it out. While it seems simple, these two tasks take a lot of complex coordination[4] of muscles and nerves – that’s the brain’s job.

For babies and young kids, the bladder has reflexes, meaning it automatically knows when to squeeze the muscles to empty the urine. Since babies can’t control this consciously, they typically wear diapers. But as kids grow[5], the bladder muscles and nerves also grow, which gives a youngster more control over their bladder.

During toilet training[6], which usually happens by the age of 3 or 4 in the U.S., kids learn how to use the toilet voluntarily. This means that they can feel when the bladder is getting full and their brain can receive and understand that signal. The brain can then tell the bladder to “hold it” until they’ve made it to the toilet and it’s safe to pee.

What happens in sleep mode?

Most children first learn how to use the toilet during the day. Using the bathroom overnight can be more difficult[7] because the sleeping brain doesn’t receive signals in the same way as when awake.

While awake, if there’s a loud noise or a bright light, the body senses it and reacts. But during sleep, the body may not hear that noise or see that light because the brain is in sleep mode[8]. Imagine sleeping through an overnight thunderstorm that you didn’t realize happened until you heard people talking about it in the morning. Your brain didn’t process the loud noises because it was focusing on sleep.

The same thing can happen with bladder signals. The bladder fills with urine 24 hours a day, even while you’re snoozing, and it sends signals to the brain when it’s full. In order to help you get enough sleep, your brain will tell your bladder to hold it until morning.

Sometimes, if you really need to go, your brain will tell your body to wake up so you can go empty your full bladder. While it’s normal to wake up to pee sometimes – especially if you drank a big cup of hot chocolate right before bed – most older kids can usually sleep through the night without needing to use the toilet.

When the brain and bladder are working together well, your bladder gradually fills up overnight and hangs on until morning, when you stumble into the bathroom to empty it.

Nighttime accidents

But there are many ways the communication between the brain and the bladder can break down. For one, the brain may not get the bladder’s message that it’s time to go. Even if the brain gets the message, it may not be able to tell the bladder to hold on. Or, when the bladder can’t wait, the brain might not tell your body to wake up. If the signals and messages aren’t sent, or are received incorrectly, the bladder will go into reflex mode[9] – it squeezes to empty itself of pee, even though you’re fast asleep in bed.

Wetting the bed at night, which doctors call nocturnal enuresis[10], is more common than you might think. About 15%[11] of kids between ages 5 and 7 wet the bed sometimes. Even some teenagers experience it. It’s more common in boys, and often there’s a family history, meaning parents or relatives may have dealt with nighttime accidents too.

A child's legs, wearing pajama pants, against a grey floor. A wet stain is visible on their bottom and on the ground behind them.
Many children wet the bed at night. Olga Rolenko/Moment via Getty Images[12]

There are a few reasons why nighttime wetting happens. Since kids’ brains are growing and developing, nighttime communication between the brain and bladder can take longer.

Some bodies make more pee at night, making it more likely the bladder will get full during sleep. Some people have smaller bladders that fill up fast. Sometimes having difficulties with sleep or being a deep sleeper can make it harder[13] to wake up at night if you really need to pee.

Most kids who wet the bed at night outgrow it as their brains and bodies continue to develop. At that point, they can sleep through the night without needing to pee, or their bodies are able to wake up at night to use the bathroom when they need to.

If wetting the bed is an issue, there are some things that can help[14], like drinking less liquid in the evening or using the bathroom right before you go to bed. These precautions make it less likely that the bladder will be too full during sleep. There are also bedwetting alarms that can help train the body to wake up when the bladder needs to be emptied. If there are concerns about nighttime accidents, or if accidents start happening in older children, I recommend consulting a doctor.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to Curious Kids at The Conversation[15]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Read more

The Conversation U.S. launched its new book club with a bang – talking to mathematician Manil Suri[1] about his nonfiction work “The Big Bang of Numbers: How to Build the Universe Using Only Math[2].” Suri, a past contributor[3] to[4] The Conversation[5], has also written an award-winning fiction trilogy[6], in addition to being a professor of mathematics and statistics at the University of Maryland, Baltimore County.

Below is an edited excerpt from the book club discussion. You’re welcome to keep the conversation flowing by adding your own questions for Suri to the comments.

Watch the full book club meeting and leave your own question in the comments at the bottom of this article.

What is the Big Bang of numbers and where do you go from there in the book?

I think the story for me started way back when I was an undergraduate in Bombay. My algebra professor told us a famous saying by the mathematician Leopold Kronecker[7]: that God gave us the integers and all the rest is the work of human beings. What he meant was that once you have the whole numbers – 1, 2, 3, 4 – which are somehow coming from heaven, then you can build up the rest of mathematics from them.

And then he went on and said, Hey, I can actually do better. I don’t need God. I can actually, as a mathematician, create the numbers out of nothing. And he showed us this marvelous, almost magic trick, where you start with something called the empty set and then you start building the numbers.
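
Suri doesn’t walk through the trick here, but the standard set-theoretic version of it – the von Neumann construction – starts from the empty set and builds each number as the set of all the numbers before it:

    0 = ∅ (the empty set)
    1 = {0} = {∅}
    2 = {0, 1} = {∅, {∅}}
    3 = {0, 1, 2}
    and, in general, n + 1 = n ∪ {n}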

It was the closest I’ve been to a religious experience, almost like the walls just dissolved and suddenly there were numbers everywhere.

Once I started writing my novels, I was meeting a lot of people who were artists and writers. And they would always say, you know, we used to love math when we were in school, but afterward we never had a chance to really pursue it. And can you tell us something about your mathematics?

So, I started building a kind of talk, which started with this big bang, as I call it, building the numbers out of nothing. I finally decided I should write a math book, and it would be aimed at a wide audience.

black and white photo of a sea shell with light triangles of various sizes
Patterns in nature, like the triangles on this shell, can be explained by simple mathematical rules. Larry Cole[8]

And I said, well, can you go further? You can create the numbers, but can you actually start building everything, including the whole universe from that? So that was a way to try to lay out mathematics almost as a story where one thing follows from the other and everything is embedded in one narrative.

Who were you imagining to be your readers as you were writing the book?

There’s just so much joy to be had out of mathematics, so many things that you don’t really see in normal courses where the emphasis is always on doing the calculations, finding the right answer. So this book is written for people who want to really engage with mathematics on the level of ideas rather than get into computations and calculations.

After you set off your Big Bang of numbers, you dig in to some of life’s big questions. What do you see as math’s role in grappling with those big thoughts, like where the universe came from, why we even exist and so on?

Once you start talking about the Big Bang, what comes into your mind is creation. There is a doctrine called creatio ex nihilo, which is basically creating everything out of nothing.

That’s a cornerstone of many religions where God creates the universe out of nothing. It’s also in some sense being explored by physicists, where you have some sort of singularity and from that, everything emerges in the Big Bang.

So my thought was, both these areas, religion and physics, are in the public’s imagination much more than mathematics is. Is there a way to posit math as the creative force of everything?

Physicist Eugene Wigner[9], who was a Nobel laureate, talked about the “unreasonable effectiveness” of mathematics at describing everything in our physical universe. It’s so good at modeling physics and what have you. Could it be that math is really the true driving force of the universe? Rather than us just inventing it and using it to describe the universe, could the universe really be describing mathematics? Then the universe is just a physical manifestation, an approximation, if you will, of those mathematical ideas. It’s a completely different view of math.

There’s an ongoing debate over whether math is something that people invented or whether it’s something that exists independently of us. In the book, you say that perhaps the deepest insight that math can offer us is that it’s both of those things.

So the glib answer to your question whether math’s invented or discovered is that you have to create a new word. Instead of discovered or invented it’s “disvented.”

What I mean by that is simply that there are some questions we really can’t get to any kind of logical or supportable answer. One is the question of our own existence – people might believe one thing or the other, but it always comes down to: Is there some real purpose to our lives, or is our creation just something that happened randomly – you know, molecules getting together?

silhouette of a head with lots of math notations exploding out
Is math something that is born from the human mind? 'The Big Bang of Numbers'[10]

Now if we invent mathematics, then we’re inventing it for a purpose. If it just generates by itself, starting with emptiness, building around numbers in some strange realm that we don’t know about, then it’s just wafting around, purposeless.

Math has that duality that can’t be resolved. So it’s a metaphor, telling us, hey, you can’t decide for math, and you’ll never be able to decide for yourself about your own existence.

Can you tell us a bit about your previous books, the Indian novels?

The first one was called “The Death of Vishnu[11].” I went back to visit my parents in Mumbai in around 1995, and this man Vishnu, who used to live in our building and do errands, was dying on our steps. I started writing this as a short story.

It started going into a more philosophical realm when a writing teacher said, you know, Vishnu is also the name of the caretaker of the universe in Hindu mythology. So if you name somebody Vishnu, you need to somehow explore that. So that’s what opened up this whole new world for me.

The second book was “The Age of Shiva[12].” That one’s the journey of a woman right after India’s independence in 1947. She’s making her way in a very male-dominated world, and she’s not perfect.

Then the third one, I decided, OK, I need to put in some science and math characters. So “The City of Devi[13]” actually has both a physicist and a statistician. Again it’s in Mumbai, set in the future with the threat of a nuclear war with Pakistan and a love triangle unfolding in front of that.

It’s kind of interesting. I thought that I was done with this mythical “where do we come from?” kind of philosophy that I had in the three books, but apparently not, because now “The Big Bang of Numbers[14]” looks at it from a mathematical perspective.

Read more

Dopamine seems to be having a moment in the zeitgeist. You may have read about it in the news[1], seen viral social media posts[2] about “dopamine hacking” or listened to podcasts[3] about how to harness what this molecule is doing in your brain to improve your mood and productivity. But recent neuroscience research suggests that popular strategies to control dopamine are based on an overly narrow view of how it functions.

Dopamine is one of the brain’s neurotransmitters[4] – tiny molecules that act as messengers between neurons. It is known for its role in tracking your reaction to rewards such as food, sex, money or answering a question correctly. There are many kinds of[5] dopamine neurons[6] located in the uppermost region of the brainstem that manufacture and release dopamine throughout the brain. Whether neuron type affects the function of the dopamine it produces has been an open question.

Recently published research reports a relationship between neuron type and dopamine function, and one type of dopamine neuron has an unexpected function[7] that will likely reshape how scientists, clinicians and the public understand this neurotransmitter.

Dopamine is involved with more than just pleasure.

Dopamine neuron firing

Dopamine is famous for the role it plays in reward processing, an idea that dates back at least 50 years[8]. Dopamine neurons monitor the difference between the rewards you thought you would get from a behavior and what you actually got. Neuroscientists call this difference a reward prediction error[9].

Eating dinner at a restaurant that just opened and that you expect to be nothing special shows reward prediction errors in action. If your meal is very good, that results in a positive reward prediction error, and you are likely to return and order the same meal in the future. Each time you return, the reward prediction error shrinks until it eventually reaches zero, when you fully expect a delicious dinner. But if your first meal was terrible, that results in a negative reward prediction error, and you probably won’t go back to the restaurant.
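
To see how quickly that happens, here is a minimal sketch of the restaurant example using a textbook delta-rule update – the learning rate and reward values are made up for illustration:

    # A toy model of a reward prediction error shrinking over repeated visits.
    expected_reward = 0.0   # you expect nothing special from the new restaurant
    actual_reward = 1.0     # the meal turns out to be very good
    learning_rate = 0.5     # hypothetical value

    for visit in range(1, 6):
        prediction_error = actual_reward - expected_reward   # what you got minus what you expected
        expected_reward += learning_rate * prediction_error  # update your expectation
        print(f"visit {visit}: prediction error = {prediction_error:.2f}, "
              f"expectation now = {expected_reward:.2f}")

Each pass through the loop, the error halves – which is the sense in which it eventually reaches zero.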

Dopamine neurons communicate reward prediction errors to the brain through their firing rates and patterns of dopamine release, which the brain uses for learning[10]. They fire in two ways[11].

Phasic firing refers to rapid bursts that cause a short-term peak in dopamine. This happens when you receive an unexpected reward or more rewards than anticipated, like if your server offers you a free dessert or includes a nice note and smiley face on your check. Phasic firing encodes reward prediction errors.

By contrast, tonic firing describes the slow and steady activity of these neurons when there are no surprises; it is background activity interspersed with phasic bursts. Phasic firing is like mountain peaks, and tonic firing is like the valley floors between them.

Diagram depicting the phasic peaks and tonic valleys of dopamine levels
This diagram shows the phasic peaks and tonic valleys of dopamine levels, the former encoding unexpected rewards and the latter encoding expected events. Dreyer et al. 2010/Journal of Neuroscience[12], CC BY-NC-SA[13]

Dopamine functions

Tracking information used in generating reward prediction errors is not all dopamine does. I have been following all the other jobs of dopamine with interest[14] through[15] my own[16] research measuring[17] brain areas where dopamine neurons are located in people.

About 15 years ago, reports started coming out that dopamine neurons respond to[18] aversive events[19] – think brief discomforts like a puff of air against your eye, a mild electric shock or losing money – something scientists thought dopamine did not do[20]. These studies showed that some dopamine neurons respond only to rewards while others respond to both rewards and negative experiences, leading to the hypothesis that there might be more than one dopamine system[21] in the brain.

These studies were soon followed by experiments showing that there is more than one type of dopamine neuron. So far, researchers have identified seven distinct types[22] of dopamine neurons by looking at their genetic profiles.

A study published in August 2023 was the first to parse dopamine function based on neuron subtype[23]. The researchers at the Dombeck Lab[24] at Northwestern University examined three types of dopamine neurons and found that two tracked rewards and aversive events while the third monitored movement, such as when the mice they studied started running faster.

Dopamine release

Recent media coverage on how to control dopamine’s effects is based only on the type of release that looks like peaks and valleys. When dopamine neurons fire in phasic bursts, as they do to signal reward prediction errors, dopamine is released throughout the brain. These dopamine peaks happen very fast[25] because dopamine neurons can fire many times in less than a second.

There is another way that dopamine release happens: Sometimes it increases slowly until a desired reward is obtained. Researchers discovered this ramp pattern[26] 10 years ago in a part of the brain called the striatum. The steepness of the dopamine ramp tracks how valuable a reward is and how much effort it takes to get it. In other words, it encodes motivation.

The restaurant example can also illustrate what happens when dopamine release occurs in a ramping pattern. When you have ordered a meal you know is going to be amazing and are waiting for it to arrive, your dopamine levels are steadily increasing. They reach a crescendo when the server places the dish on your table and you sink your teeth into the first bite.

Diagram of ramp pattern dopamine release, which shows a steep rise that levels off
This diagram shows a ramp pattern dopamine release, reaching a peak when a reward is obtained. Collins et al. 2016/Scientific Reports[27], CC BY[28]

How dopamine ramps happen is still unsettled, but this type of release is thought to underlie goal pursuit and learning[29]. Future research on dopamine ramping will affect how scientists understand motivation and will ultimately improve advice on how to optimally hack dopamine.

Dopamine(s) in disease and neurodiversity

Though dopamine is known for its involvement in drug addiction[30], neurodegenerative disease[31] and neurodevelopmental conditions[32] like attention-deficit/hyperactivity disorder, recent research suggests that scientists’ understanding of that involvement may soon need updating. Of the seven subtypes of dopamine neurons that are known so far, researchers have characterized the function of only three.

There is already some evidence that the discovery of dopamine diversity is updating scientific knowledge of disease. The researchers of the recent paper identifying the relationship between dopamine neuron type and function point out that movement-focused dopamine neurons[33] are known to be among the hardest hit in Parkinson’s disease, while two other types are not as affected. This difference might lead to more targeted treatment options.

Ongoing research untangling the diversity of dopamine will likely continue to change, and improve, our understanding of disease and neurodiversity.

Read more

Dark clouds over the Kennedy Space Center

NASA’s independent study team[1] released its highly anticipated report[2] on UFOs on Sept. 14, 2023.

In part to move beyond the stigma often attached to UFOs[3] – military pilots fear ridicule or job sanctions if they report them – the U.S. government now characterizes UFOs as UAPs, or unidentified anomalous phenomena.

Bottom line: The study team found no evidence that reported UAP observations are extraterrestrial.

I’m a professor of astronomy[4] who has written extensively on astrobiology[5] and the scientists[6] who search for life in the universe. I have long been skeptical of the claim[7] that UFOs represent visits by aliens to Earth.

From sensationalism to science

During a press briefing[8], NASA Administrator Bill Nelson[9] noted that NASA has scientific programs to search for traces of life on Mars[10] and the imprints of biology[11] in the atmospheres of exoplanets. He said he wanted to shift the UAP conversation from sensationalism to one of science.

With this statement, Nelson was alluding to some of the more outlandish claims about UAPs and UFOs. At a congressional hearing in July[12], former Pentagon intelligence officer David Grusch testified[13] that the American government has been hiding evidence of crashed UAPs and alien biological specimens. Sean Kirkpatrick[14], head of the Pentagon office charged with investigating UAPs, has denied these claims.

And the same week NASA’s report came out, journalist Jaime Maussan showed Mexican lawmakers[15] two tiny, 1,000-year-old bodies that he claimed were the remains of “non-human” beings. Scientists have called this claim fraudulent[16] and say the mummies may have been looted from gravesites in Peru.

A controversial journalist presented the Mexican government with 1,000-year-old bodies that he claimed were aliens.

Conclusions from the report

The NASA study team report sheds little light on whether some UAPs are extraterrestrial. In his comments, the chair of the study team, astronomer David Spergel[17] stated that the team had seen “no evidence to suggest that UAPs are extraterrestrial in origin.”

Of the more than 800 unclassified sightings collected by the Department of Defense’s All-domain Anomaly Resolution Office[18] and reported at the NASA panel’s first public meeting[19] back in May 2023, only “a small handful cannot be immediately identified as known human-made or natural phenomena,” according to the report[20].

Many of the recent sightings[21] can be attributed to weather balloons and airborne clutter. Historically, most UFOs are astronomical objects[22] such as meteors, fireballs[23] and the planet Venus[24].

Some sightings represent surveillance operations[25] by foreign powers, which is why the U.S. military considers this a national security issue[26].

The report does offer recommendations to NASA on how to move these investigations forward.

Most of the UAP data considered by the study team comes from U.S. military aircraft. Analysis of this data is “hampered by poor sensor calibration, the lack of multiple measurements, the lack of sensor metadata, and the lack of baseline data.” The ideal set of measurements would include optical imaging, infrared imaging, and radar data, but very few reports have all these.

The NASA study team described in the report the types of data that can shed more light on UAPs. The authors note the importance of reducing the stigma that can cause both military and commercial pilots to feel that they cannot freely report sightings. The stigma stems from decades of conspiracy theories tied to UFOs[27].

The NASA study team suggests gathering reports of sightings from commercial pilots through the Federal Aviation Administration and combining these with classified sightings not included in the report. Team members did not have security clearance, so they could look only at the subset of military sightings that were unclassified. At the moment, there is no anonymous nationwide UAP reporting mechanism for commercial pilots.

With access to these classified sightings and a structured mechanism for commercial pilots to report sightings, the All-domain Anomaly Resolution Office[28] – the military office charged with leading the analysis effort – could have the most data.

NASA also announced[29] the appointment of a new director of research on UAPs. This position will oversee the creation of a database with resources to evaluate UAP sightings.

Looking for a needle in a haystack

Parts of the briefing resembled a primer on the scientific method. Using analogies, officials described the analysis process as looking for a needle in a haystack, or separating the wheat from the chaff. The officials said they needed a consistent and rigorous methodology for characterizing sightings, as a way of homing in on something truly anomalous.

Spergel said the study team’s goal was to characterize the hay – or the mundane phenomena – and subtract it to find the needle, or the potentially exciting discovery. He noted that artificial intelligence can help researchers comb through massive datasets to find rare, anomalous phenomena. AI is already being used this way in many areas of astronomy research[30].

The speakers noted the importance of transparency. Transparency is important because UFOs have long been associated with conspiracy theories and government cover-ups[31]. Similarly, much of the discussion during the congressional UAP hearing[32] in July focused on a need for transparency. All scientific data that NASA gathers is made public on various websites, and officials said they intend to do the same with the nonclassified UAP data.

At the beginning of the briefing[33], Nelson gave his opinion that there were perhaps a trillion instances of life beyond Earth. So, it’s plausible that there is intelligent life out there. But the report says that when it comes to UAPs, extraterrestrial life must be the hypothesis of last resort. It invokes the maxim popularized by astronomer Carl Sagan: “Extraordinary claims require extraordinary evidence.” That evidence does not yet exist.

Read more

China[1], India[2] and the U.S.[3] have all landed spacecraft on the Moon in the 2020s.

Once there, their eventual goal is to set up a base[4]. But a successful base – along with the spacecraft that will carry people to it – must be habitable for humans. And a big part of creating a habitable base is making sure the heating and cooling systems work.

That’s especially true because the ambient temperature of potential places for a base can vary widely. Lunar equatorial temperatures[5] can range from minus 208 to 250 degrees Fahrenheit (minus 130 to 120 degrees Celsius) – and similarly, from minus 225 F to 70 F (minus 153 C to 20 C) on Mars[6].

In 2011, the National Academies of Sciences published a report[7] outlining research in the physical and life sciences that scientists would need to do for the U.S. space program to succeed. The report emphasized the need for research on building heating and cooling systems for structures in space.

I’m an engineering professor[8], and when that report came out, I submitted a research proposal to NASA. I wanted to study something called the liquid-vapor phenomenon. Figuring out the science behind this phenomenon would help with these big questions around keeping structures in space a comfortable and habitable temperature.

Over a decade after we submitted a proposal, my team’s project is now being tested on the International Space Station.

Going with the ‘flow’

Liquid-vapor systems – or two-phase systems – involve the simultaneous flow of liquid and vapor[9] within a heating or cooling system. While many commercial air conditioners and refrigeration systems on Earth use two-phase systems, most systems used in spacecraft and on the International Space Station are purely liquid systems – or one-phase systems.

In one-phase systems, a liquid coolant moves through the system and absorbs excess heat, which raises the liquid’s temperature. This is similar to the way cars use radiators to cool[10]. Conversely, heated liquid in the system would eject the heat out to the ambient area, lowering the liquid’s temperature to its initial level.

But liquid-vapor systems could transfer heat more effectively[11] than these one-phase systems, and they’re much smaller and lighter than purely liquid systems. When traveling in space, you have to carry everything on the craft with you, so small and light equipment is essential.
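
A back-of-the-envelope comparison shows why – using water as a stand-in, since real space systems use other working fluids: warming a kilogram of liquid water by 10 degrees Celsius absorbs about 4.2 kJ/kg·K × 10 K ≈ 42 kilojoules, while boiling that same kilogram absorbs its latent heat of vaporization, roughly 2,257 kilojoules – on the order of 50 times more heat moved by the same mass of coolant.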

There are two key processes that happen in a closed, two-phase liquid-vapor system. In one, the liquid changes to a vapor during a process called “flow boiling[12].” Just like boiling water on the stove, in flow boiling the liquid heats up and evaporates.

In systems used in space, the two-phase mixture passes through heat exchange components that transfer the heat generated from electronics, power devices and more into the mixture. This gradually increases the amount of vapor produced as the system absorbs heat and converts liquid to vapor.

Then, there’s flow condensation[13], in which the vapor cools and returns to a liquid. During flow condensation, heat leaves the system by radiating out into space.

Scientists control these two processes in a closed loop[14] so they can extract and use the heat that’s released during condensation. In the future, this technology could be used to control temperature in spacecraft going to the Moon, Mars or beyond, or even in settlements or habitats on the lunar and Martian surfaces.

Building and testing

With the grant from NASA to do this work, I designed an experimental program called the “Flow Boiling and Condensation Experiment[15].” My team built a fluid management system for the experiment and two test modules: one that helped us test flow boiling and one that helped us test flow condensation.

The International Space Station orbiting the Earth, shown below, with the Sun shown from a distance.
The Flow Boiling and Condensation Experiment is undergoing tests on the International Space Station. 3DSculptor/iStock via Getty Images[16]

Right now, the equipment used for heating and cooling[17] in space is designed based on experiments conducted in Earth’s gravity. Our flow boiling and condensation experiment seeks to change that.

First, we tested[18] whether the system and modules we built worked when subjected to Earth’s gravity. Once we learned they did, we sent them up in a parabolic flight aircraft[19]. This craft simulated reduced gravity[20] so we could get an idea of how the system performed in an environment similar to that of space.

In August 2021 we completed the flow boiling module and launched it to the International Space Station for testing in zero gravity[21]. By July 2022 we’d completed the boiling experiments. In August 2023 the flow condensation module followed, and we’ll start working on the final condensation tests soon.

Responding to reduced gravity

Liquid-vapor flow systems[22] are far more sensitive to gravity than the purely liquid systems used now, so it’s harder to design ones that work under reduced gravity.

The mechanism behind these systems has to do with the motion of liquid relative to the vapor, and what that motion looks like depends on a concept called buoyancy[23].

Buoyancy is determined by gravity as well as the density difference between liquid and vapor. So any change in gravity affects the system’s buoyancy, and thus the movement of the vapor relative to the liquid.

In space, there are also different strengths of gravity that the systems might need to operate under. Space vehicles experience microgravity[24] – near weightlessness – while a lunar habitat would operate under gravity conditions about one-sixth the strength of Earth’s gravity[25], and a Martian habitat would be operating under gravity three-eighths the strength[26] of Earth’s gravity.
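
For a rough sense of scale, here is a small sketch – approximate textbook densities for water and steam standing in for a real coolant – of the net buoyant force on a one-cubic-centimeter vapor bubble under those three gravity levels:

    # Net buoyant force on a vapor bubble: (liquid density - vapor density) * g * volume.
    RHO_LIQUID = 958.0   # kg/m^3, liquid water near its boiling point (approximate)
    RHO_VAPOR = 0.6      # kg/m^3, saturated steam (approximate)
    VOLUME = 1e-6        # m^3, a bubble of 1 cubic centimeter
    G_EARTH = 9.81       # m/s^2

    for place, g in [("Earth", G_EARTH), ("Moon", G_EARTH / 6), ("Mars", 3 * G_EARTH / 8)]:
        force = (RHO_LIQUID - RHO_VAPOR) * g * VOLUME
        print(f"{place}: about {force * 1000:.2f} millinewtons")

The weaker the buoyant force, the less readily vapor separates from liquid – which is why a design tuned to Earth’s gravity cannot simply be assumed to work on the Moon or Mars.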

Our team is working on designing flow boiling and condensation models that can work under all these levels of reduced gravity.

Vapor condensing in microgravity in a flow condensation module.

Applications for space habitats

This equipment could one day go into a human habitat on the Moon or Mars, where it would help maintain comfortable temperatures for people and machinery inside. A heat pump[27] using our flow boiling and flow condensation systems could extract the heat that astronauts and their machines give off. It would then send this collected heat out of the habitat to keep the inside cool – similar to the way air conditioners on Earth work.

The temperatures in space can be extreme and hostile to people, but with these technologies, my team might one day help create craft and habitats that allow people to explore the Moon and beyond.

Read more

Few beverages have as rich a heritage and as complicated a chemistry as bourbon whiskey, often called “America’s spirit[1].” Known for its deep amber hue and robust flavors, bourbon has captured the hearts[2] of enthusiasts across the country[3].

But for a whiskey to be called a bourbon, it has to adhere to very specific rules[4]. For one, it needs to be made in the U.S. or a U.S. territory – although almost all is made in Kentucky. The other rules have more to do with the steps to make it – how much corn is in the grain mixture, the aging process and the alcohol proof.

I’m a bourbon researcher[5] and chemistry professor[6] who teaches classes on fermentation, and I’m a bourbon connoisseur myself. The complex science[7] behind this aromatic beverage reveals why there are so many distinct bourbons, despite the strict rules around its manufacture.

The mash bill

All whiskeys have what’s called a mash bill. The mash bill refers to the recipe of grains that makes up the spirit’s flavor foundation. To be classified as bourbon, a spirit’s mash bill must have at least 51% corn[8] – the corn gives it that characteristic sweetness.

Almost all bourbons also have malted barley, which lends a nutty, smoky flavor and provides enzymes that turn starches into sugars[9] later in the production process.

Many distillers also use rye and wheat[10] to flavor their bourbons. Rye makes the bourbon spicy, while wheat produces a softer, sweeter flavor. Others might use grains like rice or quinoa[11] – but each grain chosen, and the amount of each, affects the flavor down the line.
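
A hypothetical mash bill – not any particular distillery’s recipe – might be 70% corn, 20% rye and 10% malted barley: comfortably above the 51% corn floor, with the rye adding spice and the barley supplying the starch-converting enzymes.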

The chemistry of yeast

Once distillers grind the grains from the mash bill and mix them with heated water, they add yeast to the mash. This process is called “pitching the yeast.” The yeast consumes sugars and produces ethyl alcohol and carbon dioxide as byproducts during the process called fermentation – that’s how the bourbon becomes alcoholic[12].
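
At its simplest, this is the classic fermentation of glucose:

    C6H12O6 → 2 C2H5OH (ethyl alcohol) + 2 CO2 (carbon dioxide)

along with heat and the trace byproducts described below.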

The fermented mash is now called “beer.” While similar in structure and taste to the beer you might buy in a six-pack, this product still has a way to go before it reaches its final form.

Yeast fermentation yields other byproducts besides alcohol and carbon dioxide, including flavor compounds called congeners[13]. Congeners can be esters, which produce a fruity or floral flavor, or complex alcohols, which can taste strong and aromatic.

The longer the fermentation period, the longer the yeast has to create more flavorful byproducts[14], which enhances the complexity of the spirit’s final taste. And different yeasts produce different amounts of congeners[15].

Separating the fermentation products

During distillation, distillers separate the alcohol and congeners from the fermented mash of grains, resulting in a liquid spirit. To do this, they use pot or column stills[16], which are large kettles or columns, respectively, often made at least partially of copper. These stills heat the beer and any congeners that have a boiling point of less than 350 degrees Fahrenheit (176 degrees Celsius) to form a vapor.

A row of large copper apparatuses, with a bottom like an upturned bowl and a long cylindrical column protruding from the center.
Pot stills in a distillery. FocusEye/E+ via Getty Images[17]

The type of still[18] will influence the beverage’s final flavor, because pot stills often do not separate the congeners as precisely as column stills do. Pot stills result in a spirit that often contains a more complex mixture of congeners[19].

The desired vapors that exit the still are condensed back to liquid form, and this product is called the distillate[20].

A cylindrical copper apparatus with silver holes lined up in the middle and pipes coming off it.
A column still. MattBarlow92/Wikimedia Commons[21]

Different chemical compounds have different boiling points, so distillers can separate the different chemicals by collecting the distillate at different temperatures[22]. So in the case of the pot still, as the kettle is heated, chemicals that have lower boiling points are collected first. As the kettle heats further, chemicals with higher boiling points vaporize and then are condensed and collected[23].
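
A small sketch makes the order of collection concrete – the boiling points are standard reference values, and the list is just a representative sample, not a distiller’s actual cut schedule:

    # Approximate boiling points, in degrees Celsius, of compounds found in fermented mash.
    boiling_points_c = {
        "acetaldehyde": 20.2,
        "methanol": 64.7,
        "ethanol": 78.4,
        "water": 100.0,
        "isoamyl alcohol (a fusel oil)": 131.0,
    }

    # In a pot still, whatever boils at the lowest temperature comes off first as the kettle heats.
    for compound, bp in sorted(boiling_points_c.items(), key=lambda item: item[1]):
        print(f"{compound}: boils at about {bp} C")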

By the end of the distillation process with a pot still, the distillate has been divided into a few fractions[24]. One of these fractions is called the “hearts[25],” containing mostly ethanol and water, but also small amounts of congeners, which play a big role in the final flavor of the product.

The alchemy of time and wood

After distillation, the “hearts” fraction (which is clear and resembles water) is placed in a charred oak barrel for the aging process. Here, the bourbon interacts with chemicals in the barrel’s wood, and about 70% of the bourbon’s final flavor[26] is determined by this step. The bourbon gets all its amber color during the aging process.

Bourbon may rest in the barrel for several years. During the summer, when the temperature is hot, the distillate can pass through the inner charred layer of the barrel. The charred wood acts like a filter and strains out[27] some of the chemicals before the distillate seeps into the wood. These chemicals bind to the charred layer and do not release, kind of like a water filter.

A dark, dusty wooden room with a wall lined with barrels stacked on wood shelves.
Barrels of bourbon age in a rickhouse, where they take on flavors from the barrel’s wood. The_Goat_Path/iStock via Getty Images[28]

Under the charred layer of the barrel is a “red line,” a layer where the oak was toasted during the charring process of making the barrel. The toasting process breaks down starch and other polymers[29], called lignins and tannins, in the oak.

When the distillate seeps to the red-line layer, it dissolves the sugars[30] in the barrel, as well as lignin byproducts and tannins.

During the cold winter months, the distillate retreats back into the barrel, but it takes with it these sugars, tannins and lignin byproducts from the wood, which enhance the flavors. If you disassemble a barrel after it has aged bourbon, you can see a “solvent line[31],” which shows how far into the wood the distillate penetrated. The type of oak barrel can have a profound effect on the final taste, along with the barrel’s size and how charred it is.

For most distilleries, barrels are stored in large buildings called rickhouses[32]. Ethyl alcohol and water in the distillate evaporate out of the barrel, and the humidity in that part of the rickhouse plays a big role in how much of each escapes.

Lower humidity often leads to higher-proof bourbon, as more water than ethanol leaves the barrel. In addition, air enters the barrel, and oxygen from the air reacts with some of the chemicals in the bourbon, creating new flavor chemicals. These reactions tend to soften the taste[33] of the final product.
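
As a worked example with made-up numbers: proof is simply twice the alcohol by volume, so a distillate that enters the barrel at 110 proof – 55% alcohol – and loses proportionally more water than ethanol over the years will emerge above 110 proof, while a barrel resting in a more humid spot can come out below it.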

There are thousands of bourbons[34] on the market, and they can be distinguished by their unique flavors and aromas. The variety of brands reflects the many choices that distillers make on the mash bill, fermentation and distillation conditions, and aging process. No two bourbons are quite the same.

Read more
