
Electrons moving around in a molecule might not seem like the plot of an interesting movie. But a group of scientists will receive the 2023 Nobel Prize in physics[1] for research that essentially follows the movement of electrons[2] using ultrafast laser pulses, like capturing frames in a video camera.

However, electrons, which partly make up atoms[3] and form the glue that bonds atoms in molecules together, don’t move around on the same time scale people do. They’re much faster. So, the tools that physicists like me[4] use to capture their motion have to be really fast – attosecond-scale fast.

One attosecond[5] is one billionth of a billionth of a second (10⁻¹⁸ second) – the ratio of one attosecond to one second is the same as the ratio of one second to the age of the universe.
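That comparison is easy to sanity-check with a few lines of arithmetic. A minimal sketch, assuming an age of the universe of roughly 13.8 billion years (my figure, not the article's):

```python
# Back-of-the-envelope check of the ratio claim. The universe's age
# (~13.8 billion years) is an assumed figure, not from the article.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
universe_age_s = 13.8e9 * SECONDS_PER_YEAR    # ~4.4e17 seconds

atto_to_second = 1e-18 / 1.0                  # attosecond : second
second_to_universe = 1.0 / universe_age_s     # second : age of universe

# Both ratios come out around 1e-18 -- the same order of magnitude.
print(f"{atto_to_second:.1e} vs {second_to_universe:.1e}")
```

The two ratios agree to within a factor of about two, which is all the comparison claims.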

Attosecond pulses

In photography, capturing clear images of fast objects requires a camera with a fast shutter[6] or a fast strobe of light to illuminate the object. By taking multiple photos in quick succession, the motion of the object can be clearly resolved.

The time scale of the shutter or the strobe must match the time scale of motion of the object – if not, the image will be blurred. This same idea applies when researchers attempt to image the ultrafast motion of electrons[7]. Capturing attosecond-scale motion requires an attosecond strobe. The 2023 Nobel laureates in physics[8] made seminal contributions to the generation of such attosecond laser strobes, which are very short pulses generated using a powerful laser.

Imagine the electrons in an atom are constrained within the atom by a wall. When a femtosecond (10⁻¹⁵ second) pulse from a high-powered laser is directed at atoms of a noble gas such as argon, the strong electric field in the pulse lowers the wall.

This is possible because the laser electric field is comparable in strength to the electric field of the nucleus of the atom. Electrons see this lowered wall and pass through in a bizarre process called quantum tunneling[9].

As soon as the electrons exit the atom, the laser’s electric field captures them, accelerates them to high energies and slams them back into their parent atoms. This process of recollision results in the creation of attosecond bursts of laser light.

A laser’s electric field allows electrons to escape from the atom, gain energy and then release energy as they’re reabsorbed back into the atom. Johan Jarnestad/The Royal Swedish Academy of Sciences[10], CC BY-NC-ND[11]

Attosecond movies

So how do physicists use these ultrashort pulses to make movies of electrons at the attosecond scale?

Conventional movies are made one scene at a time, with each instant captured as a frame with video cameras. The scenes are then stitched together to form the complete movie.

Attosecond movies of electrons use a similar idea. The attosecond pulses act as strobes, lighting up the electrons so researchers can capture their image, over and over again as they move – like a movie scene. This technique is called pump-probe spectroscopy[12].

However, imaging electron motion directly inside atoms is currently challenging, though researchers are developing several approaches using advanced microscopes to make direct imaging possible[13].

Typically, in pump-probe spectroscopy, a “pump” pulse gets the electron moving and starts the movie. A “probe” pulse then lights up the electron at different times after the arrival of the pump pulse, so it can be captured by the “camera,” such as a photoelectron spectrometer[14].


The information on the motion of electrons, or the “image,” is captured using sophisticated techniques. For example, a photoelectron spectrometer detects how many electrons were removed from the atom by the probe pulse, or a photon spectrometer[15] measures how much of the probe pulse was absorbed by the atom.
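The pump-probe scheme above can be sketched as a toy delay scan. This is purely illustrative pseudophysics of my own devising, not the laureates' method: I assume the probe reads out a quantity that oscillates with some fixed period as the pump-driven electron evolves.

```python
import numpy as np

# Illustrative toy model (assumed numbers, not real data): the pump
# starts an electronic oscillation; the probe samples it at a series
# of delays, and the detector records one number per delay.
period_as = 150.0                          # assumed oscillation period, in attoseconds
delays_as = np.arange(0.0, 1200.0, 50.0)   # probe delays: 0 to 1150 attoseconds
signal = 0.5 * (1.0 + np.cos(2.0 * np.pi * delays_as / period_as))

# Each (delay, signal) pair is one "frame"; stitched in order, the
# frames trace the oscillation like scenes in a movie.
frames = list(zip(delays_as, signal))
```

Sweeping the delay and recording one detector value per shot is the essence of the technique; the real measurement replaces the cosine with whatever the spectrometer actually records.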

The different “scenes” are then stitched together to make the attosecond movies of electrons. These movies help provide fundamental insight, with help from sophisticated theoretical models[16], into attosecond electronic behavior.

For example, researchers have measured where the electric charge is located[17] in organic molecules at different times, on attosecond time scales. This could allow them to control electric currents on the molecular scale.

Future applications

In most scientific research, fundamental understanding of a process leads to control of the process, and such control leads to new technologies. Curiosity-driven research[18] can lead to unimaginable applications in the future, and attosecond science is likely no different.

Understanding and controlling the behavior of electrons on the attosecond scale could enable researchers to use lasers to control chemical reactions[19] that they can’t by other means. This ability could help engineer new molecules that cannot be created with existing chemical techniques.

The ability to modify electron behavior could lead to ultrafast switches. Researchers could potentially convert an electric insulator to a conductor on attosecond scales[20] to increase the speed of electronics. Electronics currently process information on the picosecond scale (10⁻¹² second).
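To put those two time scales side by side, a quick bit of arithmetic using the figures quoted above:

```python
# Comparing the two time scales mentioned above.
picosecond = 1e-12   # current electronics switching scale, in seconds
attosecond = 1e-18   # attosecond-scale switching, in seconds

speedup = picosecond / attosecond   # roughly a millionfold difference
```

A device that switched on attosecond rather than picosecond scales would, in principle, be about a million times faster.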

The short wavelength of attosecond pulses, which is typically in the extreme-ultraviolet, or EUV, regime, may see applications in EUV lithography[21] in the semiconductor industry. EUV lithography uses laser light with a very short wavelength to etch tiny circuits on electronic chips.

The Linac Coherent Light Source at SLAC National Accelerator Laboratory. Department of Energy[22], CC BY[23]

In the recent past, free-electron lasers such as the Linac Coherent Light Source[24] at SLAC National Accelerator Laboratory in the United States have emerged as a source of bright X-ray laser light. These now generate pulses on the attosecond scale, opening many possibilities for research using attosecond X-rays.

Ideas to generate laser pulses on the zeptosecond (10⁻²¹ second) scale have also been proposed. Scientists could use these pulses, which are even faster than attosecond pulses, to study the motion of particles like protons within the nucleus.

With numerous research groups actively working on exciting problems in attosecond science, and with 2023’s Nobel Prize in physics[25] recognizing its importance, attosecond science has a long and bright future.


Elon Musk’s vision of Twitter, now rebranded as X, as an “everything app”[1] is no secret. When the X logo replaced Twitter’s blue bird[2], the internet buzzed with heated discussions[3] about just what it would mean for X to be an everything app.

Musk promoted his super app project by referring to the Chinese all-in-one app WeChat[4]. But for many American users unfamiliar with WeChat, a train of questions followed. What’s it like to use WeChat? How has WeChat become “everything” in China? Would it be possible to replicate the app’s success in the U.S.[5]?

I’m a Chinese digital media scholar[6], and I’ve used WeChat since 2012. But, in contrast to Musk’s enthusiasm, I don’t think WeChat is something to write home about. I believe it’s ordinary rather than special, lacking distinctive features compared with the other popular apps I studied for my current book project about Chinese touchscreen media.

WeChat’s inconspicuousness on my phone screen is no accident. Although WeChat is an everything app in the sense of being a digital hub for over a billion users, the app’s design is intentionally grounded in a more nuanced and philosophical meaning of the word “everything” than you might expect.

WeChat is an all-inclusive media ecosystem

Launched in 2011, WeChat[7] has become an all-in-one app that offers services covering most aspects of everyday life, from instant messaging and mobile payments to photo- and video-sharing social networking. It has become a staple of daily activities for 1.3 billion Chinese mobile users[8].

WeChat is also the app that China-bound travelers can download if they want to install only one app. WeChat can help you fill out customs declaration forms, call a taxi, pay for your hotel room and order food. Without WeChat, a traveler in China would be like a fish out of water[9], since everything in China now runs through smartphone screens and mobile payment platforms.

A smartphone displays WeChat’s group-messaging function. Ou Dongqu/Xinhua via Getty Images[10]

In this sense, WeChat is indeed an everything app. Its “everythingness” refers to its near omnipresence and omnipotence in everyday life. The app creates an all-encompassing and ever-expanding media ecosystem that influences users’ daily activities. It forms a gigantic digital hub that, as German philosopher and media theorist Peter Sloterdijk once described[11], “has drawn inwards everything that was once on the outside.”

This “everythingness” leaves little room for rival companies to achieve similar dominance and turns every tap or swipe on a user’s smartphone into something a big tech company can profit from. This dream of an internet empire is perhaps what is so enticing for tech leaders like Musk[12].

A counterintuitive design philosophy

Despite WeChat’s status as an everything app, it’s one of the least notable and attractive apps on my smartphone. WeChat rarely changes its logo to celebrate holidays or sends admin notifications to users. The app forms a relatively closed social space, since WeChat users can see only what their contacts post, unlike apps like Weibo[13] or TikTok[14], where celebrities amass millions of followers.

WeChat’s splash screen is visually clean and has been unchanged for a decade. Screen capture by Jianqing Chen

But the lack of flashy, attention-grabbing features is actually one of WeChat’s intentional design philosophies, as WeChat’s founder and chief developer Allen Xiaolong Zhang made clear in his annual public speeches in 2019 and 2020[15]. Zhang emphasized that one of WeChat’s design principles is to “get users out of the app as fast as possible,” meaning to reduce the amount of time users spend in WeChat.

This might seem paradoxical – if WeChat is trying to get its users to leave the app as fast as possible, how can it maintain its internet empire? Typically an app’s popularity is assessed based on how long users spend in the app, and users’ attention is the scarce resource[16] various digital platforms fight for.

But Zhang claims that in order to sustain users’ daily engagement with the app in the long run, it’s important to let them leave the app as fast as possible. A low demand for time and effort is key to bringing users back into the app without exhausting them.

A Taoist message behind WeChat’s design

The design of WeChat miniprograms[17] makes Zhang’s idea clear. Miniprograms are embedded into WeChat as third-party developed sub-applications, and they provide users with easy access to a large range of services – like hailing a taxi, ordering food, buying train tickets and playing games – without leaving WeChat. Users can simply search in the app or scan a QR code to open a miniprogram, skipping the cumbersome processes of installing and uninstalling new apps.

WeChat has a panel of miniprograms that users pull down from the top of the screen. Screen capture by Jianqing Chen, CC BY-ND[18]

Miniprograms are stored in a hidden panel at the top of the screen. They can be opened by swiping down the screen. These miniprograms appear to be ephemeral, diffusive and almost atmospheric. They give users the feeling that WeChat has disappeared or merged into the environment.

WeChat is what media scholars call “elemental[19]”: inconspicuous and nonintrusive, yet pervasive and as fundamental as the natural elements, just like air, water and clouds.

This environment of pervasiveness and unobtrusiveness resonates with the ancient Chinese Taoist philosophy that understands nothing (wu 无, or “not-being”) as that which forms the basis of all things (wanwu 万物 or “ten thousand things”). As Tao Te Ching states[20], “Dao begets One (or nothingness), One begets Two (yin and yang), Two begets Three (Heaven, Earth and Man; or yin, yang and breath qi), Three begets all things.” For Taoist thinkers, not-being determines how all things within the cosmos come into being, evolve and disappear.

Although the depth of these sagely texts is unfathomable, the Taoist thoughts from the past help people appreciate the interplay of everything and nothing. This perspective adds another layer of meaning to “everything” and opens up alternative visions of what an everything app can be.

Perhaps WeChat’s interpretation of the word “everything” – as simultaneously pervasive and inconspicuous – is the secret to its success over the past 10 years. I believe many tech leaders could benefit from a more sophisticated understanding of “everything” when envisioning the everything app, and not just equate “everything” simply with big and comprehensive.


Living cells work better than dying cells, right? However, this is not always the case: your cells often sacrifice themselves to keep you healthy[1]. The unsung hero of life is death.

While death may seem passive, an unfortunate ending that just “happens,” the death of your cells is often extremely purposeful and strategic. The intricate details of how and why cells die can have significant effects on your overall health.

There are over 10 different ways cells can “decide” to die, each serving a particular purpose for the organism. My own research[2] explores how immune cells switch between different types of programmed death in scenarios like cancer or injury.

Programmed cell death can be broadly divided into two types[3] that are crucial to health: silent and inflammatory.

Quietly exiting: silent cell death

Cells can often become damaged because of age, stress or injury, and these abnormal cells can make you sick[4]. Your body runs a tight ship, and when cells step out of line, they must be quietly eliminated before they overgrow into tumors or cause unnecessary inflammation[5] where your immune system is activated and causes fever, swelling, redness and pain.

Your body swaps out cells every day[6] to ensure that your tissues are made up of healthy, functioning ones. The parts of your body that are more likely to see damage, like your skin and gut, turn over cells weekly, while other cell types can take months to years to recycle. Regardless of the timeline, the death of old and damaged cells and their replacement with new cells is a normal and important bodily process.

Silent cell death, or apoptosis[7], is described as silent because these cells die without causing an inflammatory reaction. Apoptosis is an active process involving many proteins and switches within the cell. It’s designed to strategically eliminate cells without alarming the rest of the body.

Sometimes cells can detect that their own functions are failing and turn on executioner proteins[8] that chop up their own DNA, and they quietly die by apoptosis. Alternatively, healthy cells can order overactive or damaged neighbor cells to activate their executioner proteins.

Apoptosis is important to maintaining a healthy body. In fact, you can thank apoptosis for your fingers and toes[9]. Fetuses initially have webbed fingers until the cells that form the tissue between them undergo apoptosis and die off.

The toes of this embryonic mouse foot are forming through apoptosis. Michal Maňas/Wikimedia Commons[10], CC BY-SA[11]

Without apoptosis, cells can grow out of control. A well-studied example of this is cancer. Cancer cells are abnormally good at growing and dividing, and those that can resist apoptosis[12] form very aggressive tumors. Understanding how apoptosis works and why cancer cells can disrupt it can potentially improve cancer treatments.

Other conditions can benefit from apoptosis research as well. Your body makes a lot of immune cells that all respond to different targets, and occasionally one of these cells can accidentally target your own tissues. Apoptosis is a crucial way your body can eliminate these immune cells before they cause unnecessary damage. When apoptosis fails to eliminate these cells, sometimes because of genetic abnormalities, this can lead to autoimmune diseases[13] like lupus.

Another example of the role apoptosis plays in health is endometriosis[14], an understudied disease caused by the overgrowth of tissue in the uterus. It can be extremely painful and debilitating for patients. Researchers have recently linked this out-of-control growth in the uterus[15] to dysfunctional apoptosis.

Whether it’s for development or maintenance, your cells are quietly exiting to keep your body happy and healthy.

Going out with a bang: inflammatory cell death

Sometimes, it is in your body’s best interest for cells to raise an alarm as they die. This can be beneficial when cells detect the presence of an infection and need to eliminate themselves as a target while also alerting the rest of the body. This inflammatory cell death[16] is typically triggered by bacteria, viruses or stress.

Rather than quietly shutting down, cells undergoing inflammatory cell death will make themselves burst, or lyse, killing themselves and releasing inflammatory messengers as they go. These messengers tell your immune cells that there is a threat and prompt them to respond and fight the pathogen.

An inflammatory death would not be healthy for maintenance. If the normal recycling of your skin or gut cells caused an inflammatory reaction, you would feel sick a lot. This is why inflammatory death is tightly controlled[17] and requires multiple signals to initiate.

Despite the riskiness of this grenadelike death, many infections would be impossible to fight without it. Many bacteria and viruses need to live around or inside your cells to survive. When specialized sensors on your cells detect these threats, they can simultaneously activate your immune system and remove themselves as a home for pathogens. Researchers call this eliminating the niche[18] of the pathogen.


Inflammatory cell death plays a major role in pandemics. Yersinia pestis[19], the bacterium behind the Black Death, has evolved various ways of stopping human immune cells from mounting a response. However, immune cells developed the ability to sense this trickery and die an inflammatory death. This ensures that additional immune cells will infiltrate and eliminate the bacteria despite the bacteria’s best attempts to prevent a fight.

Although the Black Death is not as common nowadays, close relatives Yersinia pseudotuberculosis and Yersinia enterocolitica are behind outbreaks of food-borne illnesses[20]. These infections are rarely fatal because your immune cells can aggressively eliminate the pathogen’s niche by inducing inflammatory cell death. For this reason, however, Yersinia infection can be more dangerous in immunocompromised people.

The virus behind the COVID-19 pandemic[21] also causes a lot of inflammatory cell death. Studies show that without cell death the virus would freely live inside your cells and multiply. However, this inflammatory cell death can sometimes get out of control and contribute to the lung damage[22] seen in COVID-19 patients, which can greatly affect survival. Researchers are still studying the role of inflammatory cell death in COVID-19 infection, and understanding this delicate balance can help improve treatments.

In good times and bad, your cells are always ready to sacrifice themselves to keep you healthy. You can thank cell death for keeping you alive.


A group of three researchers earned the 2023 Nobel Prize in physics[1] for work that has revolutionized how scientists study the electron – by illuminating molecules with attosecond-long flashes of light. But how long is an attosecond, and what can these infinitesimally short pulses tell researchers about the nature of matter?

I first learned[2] of this area of research as a graduate student in physical chemistry. My doctoral adviser’s group had a project dedicated to studying chemical reactions with attosecond pulses[3]. Before understanding why attosecond research resulted in the most prestigious award in the sciences, it helps to understand what an attosecond pulse of light is.

How long is an attosecond?

“Atto” is the scientific notation prefix[4] that represents 10⁻¹⁸, which is a decimal point followed by 17 zeroes and a 1. So a flash of light lasting an attosecond, or 0.000000000000000001 of a second, is an extremely short pulse of light.

In fact, there are approximately as many attoseconds in one second as there are seconds in the age of the universe[5].

An attosecond is incredibly small when compared to a second. ©Johan Jarnestad/The Royal Swedish Academy of Sciences[6], CC BY-NC-ND[7]

Previously, scientists could study the motion of heavier and slower-moving atomic nuclei with femtosecond (10⁻¹⁵ second) light pulses[8]. There are 1,000 attoseconds in 1 femtosecond. But researchers couldn’t see movement on the electron scale until they could generate attosecond light pulses – electrons move too fast for scientists to parse exactly what they are up to at the femtosecond level.

Attosecond pulses

The rearrangement of electrons in atoms and molecules guides a lot of processes in physics, and it underlies practically every part of chemistry. Therefore, researchers have put a lot of effort into figuring out how electrons are moving and rearranging.

However, electrons move around very rapidly in physical and chemical processes, making them difficult to study. To investigate these processes, scientists use spectroscopy[9], a method of examining how matter absorbs or emits light. In order to follow the electrons in real time[10], researchers need a pulse of light that is shorter than the time it takes for electrons to rearrange.

Pump-probe spectroscopy is a common technique in physics and chemistry and can be performed with attosecond light pulses.

As an analogy, imagine a camera that could only take longer exposures, around 1 second long. Things in motion, like a person running toward the camera or a bird flying across the sky, would appear blurry in the photos taken, and it would be difficult to see exactly what was going on.

Then, imagine you use a camera with a 1 millisecond exposure. Now, motions that were previously smeared out would be nicely resolved into clear and precise snapshots. That’s how using the attosecond scale, rather than the femtosecond scale, can illuminate electron behavior.
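The camera analogy can be put into numbers. The speed and exposure times below are my own illustrative choices, not figures from the research:

```python
# Motion blur = distance covered while the shutter is open.
# All numbers here are illustrative assumptions.
speed_m_s = 5.0            # a runner moving at 5 m/s
long_exposure_s = 1.0      # 1-second exposure
short_exposure_s = 1e-3    # 1-millisecond exposure

blur_long = speed_m_s * long_exposure_s    # 5 meters of smear: hopelessly blurry
blur_short = speed_m_s * short_exposure_s  # 5 millimeters: effectively a sharp snapshot
```

Shrinking the exposure by a factor of 1,000 shrinks the smear by the same factor, which is exactly the gain attosecond pulses offer over femtosecond ones.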

Attosecond research

So what kind of research questions can attosecond pulses help answer?

For one, breaking a chemical bond is a fundamental process in nature where electrons that are shared between two atoms separate out into unbound atoms. The previously shared electrons undergo ultrafast changes during this process, and attosecond pulses[11] made it possible for researchers to follow the real-time breaking of a chemical bond.

The ability to generate attosecond pulses[12] – the research for which three researchers earned the 2023 Nobel Prize in physics[13] – first became possible in the early 2000s, and the field has continued to grow rapidly[14] since. By providing shorter snapshots of atoms and molecules, attosecond spectroscopy has helped researchers understand electron behavior in single molecules, such as how electron charge migrates[15] and how chemical bonds[16] between atoms break.

On a larger scale, attosecond technology has also been applied to studying how electrons behave in liquid water[17] as well as electron transfer in solid-state semiconductors[18]. As researchers continue to improve their ability to produce attosecond light pulses, they’ll gain a deeper understanding of the basic particles that make up matter.



The Wireless Emergency Alert system[1] is scheduled to have its third nationwide test[2] on Oct. 4, 2023. The Wireless Emergency Alert system is a public safety system that allows authorities to alert people via their mobile devices of dangerous weather, missing children and other situations requiring public attention.

Similar tests in 2018 and 2021 caused a degree of public confusion[3] and resistance[4]. In addition, there was confusion around the first test of the U.K. system in April 2023[5], and an outcry surrounding accidental alert messages such as those sent in Hawaii in January 2018[6] and in Florida in April 2023[7].

The federal government lists five types of emergency alerts[8]: National (formerly labeled Presidential), Imminent Threat, Public Safety, America’s Missing: Broadcast Emergency Response (Amber), and Opt-in Test Messages. You can opt out of any except National Alerts, which are reserved for national emergencies. The Oct. 4 test is a National Alert.

We are a media studies researcher[9] and a communications researcher[10] who study emergency alert systems. We believe that concerns about previous tests raise two questions: Is public trust in emergency alerting eroding? And how might the upcoming test rebuild it?

Confusion and resistance

In an ever-updating digital media environment, emergency alerts appear as part of a constant stream of updates, buzzes, reminders and notifications on people’s smartphones. Over-alerting is a common fear in emergency management circles[11] because it can lead people to ignore alerts and not take needed action. The sheer volume of different updates can be similarly overwhelming, burying emergency alerts in countless other messages. Many people have even opted out of alerts[12] when possible, rummaging through settings and toggling off every alert they can find.

Even when people receive alerts, however, there is potential for confusion and rejection. All forms of emergency alerts rely on the recipients’ trust[13] in the people or organization responsible for the alert. But it’s not always clear who the sender is. As one emergency manager explained to one of us regarding alerts used during COVID-19: “People were more confused because they got so many different notifications, especially when they don’t say who they’re from.”

When the origin of an alert is unclear, or the recipient perceives it to have a political bias counter to their own views[14], people may become confused or resistant to the message. Prior tests and use of the Wireless Emergency Alert system have indicated strong anti-authority attitudes, particularly following the much-derided 2018 test of what was then called the Presidential Alert message class[15]. There are already conspiracy theories[16] online about the upcoming test.

People receive mobile alerts from then-president Donald Trump in a ‘Saturday Night Live’ sketch aired on Oct. 6, 2018.

Trust in alerts is further reduced by the overall lack of testing and public awareness work done on behalf of the Wireless Emergency Alert system since its launch in June 2012[17]. As warning expert Dennis Mileti explained in his 2018 Federal Emergency Management Agency PrepTalk[18], routine public tests are essential for warning systems’ effectiveness. However, the Wireless Emergency Alert system has been tested at the national level only twice, and there has been little public outreach to explain the system by either the government or technology companies.

More exposure and info leads to more trust

The upcoming nationwide test may offer a moment that could rebuild trust in the system. A survey administered in the days immediately following the 2021 national test found that more respondents believed that the National Alert message class label would signal more trustworthy information[19] than the Presidential Alert message class label.

Similarly, in contrast to the 2021 test, which targeted only select users, the Oct. 4 test is slated to reach all compatible devices in the U.S. Since users cannot opt out of the National Alert message class, this week’s test is a powerful opportunity to build awareness about the potential benefits of a functional federal emergency alert system.

The Oct. 4 test message is expected to state, “THIS IS A TEST of the National Wireless Emergency Alert system. No action is needed.” We instead suggest that action is, in fact, urgently needed to help people better understand the rapidly changing mobile alert and warning ecosystem that confronts them. Familiarity with this system is what will allow it to support public health and safety, and address the crises of the 21st century.

Here are steps that you can take now to help make the Wireless Emergency Alert system more effective:

  • The Wireless Emergency Alert system is only one form of emergency alert. Identify which mobile notification systems are used by your local emergency management organizations: police, fire and emergency services. Know which systems are opt-in and opt-out, and opt in to those needed. Ensure access to other sources of information during an emergency, such as local radio and television, or National Oceanic and Atmospheric Administration weather radio.

  • Understand the meaning of mobile device notification settings. Just because you are opted in to “Emergency Alerts” on your cellphone does not necessarily mean you are signed up to receive notifications from local authorities. Check the FEMA website[20] for information about the Wireless Emergency Alert system and your local emergency management organizations’ websites about opt-in systems.

  • Have a plan for contacting family, friends and neighbors during an emergency. Decide in advance who will help the vulnerable members of your community.

  • Find out if your local emergency management organizations test their alert systems, and make sure to receive those local tests.

  • Anticipate the possibility that mobile systems will be damaged or unavailable during a crisis and prepare essentials for sheltering in place or quick evacuation.

Finally, push back on the lack of information and rise of misinformation about alerts by sharing reliable information about emergency alerts with your family and friends.


On April 8, 1911, Dutch physicist Heike Kamerlingh Onnes[1] scribbled in pencil an almost unintelligible note into a kitchen notebook[2]: “near enough null.”

The note referred to the electrical resistance he’d measured during a landmark experiment that would later be credited as the discovery of superconductivity. But first, he and his team would need many more trials to confirm the measurement.

Their discovery opened up a world of potential scientific applications. The century since has seen many advances, but superconductivity researchers today can take lessons from Onnes’ original, Nobel Prize-winning work[3].

I have always been interested in origin stories. As a physics professor and the author of books on the history of physics[4], I look for the interesting backstory – the twists, turns and serendipities that lie behind great discoveries.

The true stories behind these discoveries are usually more chaotic than the rehearsed narratives crafted after the fact, and some of the lessons learned from Onnes’ experiments remain relevant today as researchers search for new superconductors that might, one day, operate near room temperature.

Superconductivity

A rare quantum effect that allows electrical currents to flow without resistance in superconducting wires, superconductivity enables[5] a myriad of scientific applications. These include MRI machines[6] and powerful particle accelerators[7].

Imagine giving a single push to a row of glass beads strung on a frictionless wire. Once the beads start moving down the wire, they never stop, like a perpetual motion[8] machine. That’s the idea behind superconductivity – particles flowing without resistance.

Superconductivity happens when a current experiences no electrical resistance.

For superconductors to work, they need to be cooled to ultra-low temperatures colder than any Arctic blast. That’s why Onnes’ original work cooling helium to near absolute zero temperature[9] set the stage for his unexpected discovery of superconductivity.

The discovery

Onnes[10], a physics professor at the University of Leiden in the Netherlands, built the leading low-temperature physics laboratory in the world in the first decade of the 20th century.

His lab[11] was the first to turn helium from a gas to a liquid by making the gas expand and cool. His lab managed to cool helium this way to a temperature of -452 degrees Fahrenheit (-269 degrees Celsius).

Onnes then began studying the electrical conductivity of metals at these cold temperatures. He started with mercury because it conducts electricity and is liquid at room temperature, making it easy to fill into glass tubes. At low temperatures, the mercury would freeze solid, creating metallic wires that Onnes could use in his conductivity experiments.

On April 8, 1911, his lab technicians transferred liquid helium into a measurement cryostat – a glass container with a vacuum jacket to insulate it from the room’s heat. They cooled the helium to -454 F (-270 C) and then measured the electrical resistance of the mercury wire by sending a small current through it and measuring the voltage.
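The measurement principle here is Ohm’s law: resistance is the ratio of the measured voltage to the applied current, R = V/I. A minimal sketch of the idea in Python – the numbers below are hypothetical, chosen only for illustration, since the article does not report Onnes’ actual readings:

```python
# Ohm's law: resistance R (ohms) = voltage V (volts) / current I (amperes).
def resistance(voltage_v, current_a):
    """Return resistance in ohms from a measured voltage and an applied current."""
    return voltage_v / current_a

# A small test current through an ordinary wire produces a measurable voltage,
# so the computed resistance is finite (values hypothetical):
normal_wire = resistance(0.001, 0.01)   # finite resistance

# A superconducting wire carrying the same current shows no measurable
# voltage at all, so the resistance comes out as zero -- "near enough null":
superconducting_wire = resistance(0.0, 0.01)

print(normal_wire, superconducting_wire)
```

This is why a vanishing voltage reading at fixed current was the signature Onnes was looking for: with V at zero, R is zero too.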

It was then that Onnes wrote the cryptic “near enough null” measurement into his kitchen notebook[12], meaning that the wire was conducting electricity without any measurable resistance.

That date of April 8 is often cited as the discovery of superconductivity, but the full story isn’t so simple: scientists can’t accept a scribbled “near enough null” as sufficient proof of a new discovery.

In pursuit of proof

Onnes’ team performed its next experiment more than six weeks later[13], on May 23. On this day, they cooled the cryostat again to -454 F (-270 C) and then let the temperature slowly rise.

At first they measured barely any electrical resistance, indicating superconductivity. The resistance stayed small up to -452 F, then suddenly rose by a factor of more than 400 as the temperature inched up just a fraction of a degree.

The rise was so rapid and so unexpected that they started searching for some form of electrical fault or open circuit that might have been caused by the temperature shifts. But they couldn’t find anything wrong. They spent five more months improving their system before trying again. On Oct. 26 they repeated the experiment, reproducing the sudden rise in resistance they had seen earlier.

A graph with the resistance of mercury on the y-axis and temperature on the x-axis, showing a sharp drop.
The resistance of mercury as recorded on Oct. 26, 1911, by Onnes’ lab. Heike Kamerlingh Onnes via Wikimedia Commons[14]

One week later, Onnes presented these results to the first Solvay Conference[15], and two years later he received his Nobel Prize in physics, recognizing his low-temperature work generally but not superconductivity specifically.

It took another three years of diligent work before Onnes had his irrefutable evidence: He measured persistent currents that did not decay, demonstrating truly zero resistance and superconductivity on April 24, 1914.

New frontiers for critical temperatures

In the decades following Onnes’ discovery, many researchers have explored[16] how metals act at supercooled temperatures and have learned more about superconductivity.

But if researchers can observe superconductivity only at super low temperatures, it’s hard to make anything useful. It is too expensive to operate a machine practically if it works only at -400 F (-240 C).

So, scientists began searching for superconductors that can work at practical temperatures. For instance, K. Alex Müller and J. Georg Bednorz at the IBM research laboratory[17] in Switzerland figured out that metal oxides[18] like lanthanum-barium-copper oxide, known as LBCO, could be good candidates[19].

It took the IBM team about three years to find superconductivity in LBCO. But when they did, their work set a new record[20], with superconductivity observed at -397 F (-238 C) in 1986.

A year later, in 1987, a lab in Houston replaced lanthanum in LBCO with the element yttrium to create YBCO. They demonstrated superconductivity at -292 F (-180 C)[21]. This discovery made YBCO the first practical superconductor, because it could work while immersed in inexpensive liquid nitrogen.

Since then, researchers have observed superconductivity at temperatures as high as -164 F[22] (-109 C), but achieving a room-temperature superconductor has remained elusive.
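The Fahrenheit/Celsius pairs quoted throughout this article all follow from the standard conversion C = (F − 32) × 5/9. A quick Python check of the milestone temperatures mentioned above (the labels are informal shorthand for the experiments described in the text):

```python
# Convert Fahrenheit to Celsius: C = (F - 32) * 5/9
def f_to_c(f):
    return (f - 32) * 5 / 9

# (Fahrenheit, Celsius) pairs as quoted in the article
milestones = {
    "liquid helium, Onnes' lab":        (-452, -269),
    "superconductivity discovery runs": (-454, -270),
    "LBCO record, 1986":                (-397, -238),
    "highest observed to date":         (-164, -109),
}

for name, (f, c) in milestones.items():
    # Each converted value should round to the Celsius figure in the text.
    print(f"{name}: {f} F = {f_to_c(f):.0f} C (article: {c} C)")
```

Running this confirms that each Fahrenheit figure rounds to the Celsius value given in the text.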

Chart of the discoveries of new superconductors plotted as critical temperature versus year of discovery, with each discovery labeled with a shape, color and abbreviation.
Timeline of accomplishments in superconductivity research. Gingras.ol/Wikimedia Commons[23], CC BY-NC-SA[24]

In 2023, two groups claimed they had evidence for room-temperature superconductivity, though both reports have been met with sharp skepticism[25], and both are now in limbo following further scrutiny.

Superconductivity has always been tricky to prove because some metals can masquerade as superconductors. The lessons learned by Onnes a century ago – that these discoveries require time, patience and, most importantly, proof of currents that never stop – are still relevant today.


The 2023 Nobel Prize in physiology or medicine[1] will go to Katalin Karikó and Drew Weissman for their discovery that modifying mRNA[2] – a form of genetic material your body uses to produce proteins – could reduce unwanted inflammatory responses and allow it to be delivered into cells. While the impact of their findings may not have been apparent at the time of their breakthrough over a decade ago, their work paved the way for the development of the Pfizer-BioNTech and Moderna COVID-19 vaccines[3], as well as many other therapeutic applications currently in development.

We asked André O. Hudson, a biochemist and microbiologist[4] at the Rochester Institute of Technology, to explain how basic research like that of this year’s Nobel Prize winners provides the foundations for science – even when its far-reaching effects won’t be felt until years later.

What is basic science?

Basic research[5], sometimes called fundamental research, is a type of investigation with the overarching goal of understanding natural phenomena like how cells work or how birds can fly. Scientists ask the fundamental questions of how, why, when, where and if in order to fill gaps in our understanding of the natural world.

Researchers sometimes conduct basic research with the hope of eventually developing a technology or drug based on that work. But what many scientists typically do in academia is ask fundamental questions with answers that may or may not ever lead to practical applications.

Humans, and the animal kingdom as a whole, are wired to be curious[6]. Basic research scratches that itch.

What are some basic science discoveries that went on to have a big influence on medicine?

The 2023 Nobel Prize in physiology or medicine[7] acknowledges basic science work done in the early 2000s. Karikó and Weissman’s discovery about modifying mRNA to reduce the body’s inflammatory response to it allowed other researchers to leverage it to make improved vaccines.

Another example is the discovery of antibiotics[8], which was based on an unexpected observation. In the late 1920s, the microbiologist Alexander Fleming was growing a species of bacteria in his lab and found that his Petri dish was accidentally contaminated with the fungus Penicillium notatum. He noticed that wherever the fungus was growing, it impeded or inhibited the growth of the bacteria. He wondered why that was happening and subsequently went on to isolate penicillin, which was approved for medical use in the early 1940s.

This work fed into more questions that ushered in the age of antibiotics. The 1952 Nobel Prize in physiology or medicine was awarded to Selman Waksman for his discovery of streptomycin[9], the first antibiotic to treat tuberculosis.

Penicillin was discovered by accident.

Basic research often involves seeing something surprising, wanting to understand why and deciding to investigate further. Early discoveries start from a basic observation, asking the simple question of “How?” Only later are they parlayed into a medical technology that helps humanity.

Why does it take so long to get from curiosity-driven basic science to a new product or technology?

The mRNA modification discovery could be considered to be on a relatively fast track from basic science to application. Less than 15 years passed between Karikó and Weissman’s findings and the COVID-19 vaccines. The importance of their discovery came to the forefront during the pandemic, with the millions of lives[10] the vaccines saved.

Most basic research won’t reach the market until several decades[11] after its initial publication in a science journal. One reason is that it depends on need. For example, orphan diseases[12] that affect only a small number of people will get less attention and funding than conditions that are ubiquitous in a population, like cancer or diabetes. Companies don’t want to spend billions of dollars developing a drug that will have only a small return on their investment. Likewise, because the return on investment for basic research often isn’t clear, it can be a hard sell to support financially.

Another reason is cultural. Scientists are trained to chase after funding and support for their work wherever they can find it. But sometimes that’s not as easy as it seems.

A good example of this was when the human genome was first sequenced[13] in the early 2000s. A lot of people thought that having access to the full sequence would lead to treatments and cures for many different diseases. But that has not been the case[14], because there are many nuances to translating basic research to the clinic. What works in a cell or an animal might not translate into people. There are many steps and layers in the process to get there.

Why is basic science important?

For me, the most critical reason is that basic research is how we train and mentor future scientists[15].

In an academic setting, telling students “Let’s go develop an mRNA vaccine” versus asking “How does mRNA work in the body?” influences how they approach science. How do they design experiments? Do they start the study going forward or backward? Are they argumentative or cautious in how they present their findings?

Close-up of scientist wearing nitrile gloves looking into microscope hovering over Petri dish
There are many steps between translating findings in a lab to the clinic. Marco VDM/E+ via Getty Images[16]

Almost every scientist is trained under a basic research umbrella of how to ask questions and go through the scientific method. You need to understand how, when and where mRNAs are modified before you can even begin to develop an mRNA vaccine. I believe the best way to inspire future scientists is to encourage them to expand on their curiosity in order to make a difference.

When I was writing my dissertation, I was relying on studies that were published in the late 1800s and early 1900s. Many of these studies are still cited in scientific articles today. When researchers share their work, it may not be useful today, tomorrow or even 10 to 20 years from now, but eventually it will be of use to someone. You’ll make a future scientist’s job a little bit easier, and I believe that’s a great legacy to have.

What is a common misconception about basic science?

Because any immediate use for basic science can be very hard to see, it’s easy to think this kind of research is a waste of money or time[17]. Why are scientists breeding mosquitoes in these labs? Or why are researchers studying migratory birds? The same argument has been made with astronomy. Why are we spending billions of dollars putting things into space? Why are we looking to the edge of the universe and studying stars when they are millions and billions of light years away? How does it affect us?

There is a need for more scientific literacy[18], because without it, it can be difficult to understand why basic research is necessary for future breakthroughs that will have a major effect on society.

In the short term, the worth of basic research can be hard to see. But in the long term, history has shown that a lot of what we take for granted now, such as common medical technologies like X-rays[19], lasers[20] and MRIs[21], came from basic things people discovered in the lab.

And it still goes down to the fundamental questions – we’re a species that seeks answers to things we don’t know. As long as curiosity is a part of humanity, we’re always going to be seeking answers.
