A group of three researchers earned the 2023 Nobel Prize in physics[1] for work that has revolutionized how scientists study the electron – by illuminating molecules with attosecond-long flashes of light. But how long is an attosecond, and what can these infinitesimally short pulses tell researchers about the nature of matter?

I first learned[2] of this area of research as a graduate student in physical chemistry. My doctoral adviser’s group had a project dedicated to studying chemical reactions with attosecond pulses[3]. Before understanding why attosecond research resulted in the most prestigious award in the sciences, it helps to understand what an attosecond pulse of light is.

How long is an attosecond?

“Atto” is the scientific notation prefix[4] that represents 10⁻¹⁸, which is a decimal point followed by 17 zeroes and a 1. So a flash of light lasting an attosecond, or 0.000000000000000001 of a second, is an extremely short pulse of light.

In fact, there are approximately as many attoseconds in one second as there are seconds in the age of the universe[5].
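As a rough back-of-the-envelope check – assuming an age of the universe of about 13.8 billion years, a standard figure rather than one quoted above – the two counts do land within a factor of a few of each other:

```python
# Back-of-the-envelope comparison: seconds since the Big Bang vs. attoseconds in one second.
# The ~13.8-billion-year age of the universe is an assumed round figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 seconds in a year
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR  # ~4.4e17 seconds
attoseconds_per_second = 1 / 1e-18             # 1e18 attoseconds

print(f"Seconds since the Big Bang: ~{age_of_universe_s:.1e}")
print(f"Attoseconds in one second:  ~{attoseconds_per_second:.1e}")
# Both counts are of order 10^17-10^18, so they agree to within a factor of a few.
```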

A diagram showing an attosecond, depicted as an orange collection of hexagons, on the left, with the age of the universe, depicted as a dark vacuum on the right, and a heartbeat, depicted as a human heart, in the middle.
An attosecond is incredibly small when compared to a second. ©Johan Jarnestad/The Royal Swedish Academy of Sciences[6], CC BY-NC-ND[7]

Previously, scientists could study the motion of heavier and slower-moving atomic nuclei with femtosecond (10⁻¹⁵ second) light pulses[8]. There are 1,000 attoseconds in 1 femtosecond. But researchers couldn’t see movement on the electron scale until they could generate attosecond light pulses – electrons move too fast for scientists to parse exactly what they are up to at the femtosecond level.

Attosecond pulses

The rearrangement of electrons in atoms and molecules guides a lot of processes in physics, and it underlies practically every part of chemistry. Therefore, researchers have put a lot of effort into figuring out how electrons are moving and rearranging.

However, electrons move around very rapidly in physical and chemical processes, making them difficult to study. To investigate these processes, scientists use spectroscopy[9], a method of examining how matter absorbs or emits light. In order to follow the electrons in real time[10], researchers need a pulse of light that is shorter than the time it takes for electrons to rearrange.

Pump-probe spectroscopy is a common technique in physics and chemistry and can be performed with attosecond light pulses.

As an analogy, imagine a camera that could only take longer exposures, around 1 second long. Things in motion, like a person running toward the camera or a bird flying across the sky, would appear blurry in the photos taken, and it would be difficult to see exactly what was going on.

Then, imagine you use a camera with a 1 millisecond exposure. Now, motions that were previously smeared out would be nicely resolved into clear and precise snapshots. That’s how using the attosecond scale, rather than the femtosecond scale, can illuminate electron behavior.
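A toy numerical sketch of this analogy – with a 100 Hz oscillation standing in for the fast motion and exposure windows chosen purely for illustration, not taken from any real experiment – shows how a long “exposure” averages the motion away while a short one resolves it:

```python
import numpy as np

# Toy illustration of the exposure-time analogy (not a model of a real pump-probe experiment).
# A quantity oscillating at 100 Hz plays the role of the fast motion; a "photo" is its
# average over one exposure window.
def photo(start_s, exposure_s, frequency_hz=100.0, samples=10_000):
    t = np.linspace(start_s, start_s + exposure_s, samples)
    return np.sin(2 * np.pi * frequency_hz * t).mean()

starts = np.linspace(0.0, 0.01, 5)  # five snapshots spread across one 10-millisecond period

blurred = [photo(s, exposure_s=1.0) for s in starts]   # long, 1-second exposures
sharp = [photo(s, exposure_s=1e-4) for s in starts]    # short, 0.1-millisecond exposures

print("1 s exposures:   ", np.round(blurred, 3))  # all ~0: the motion is smeared away
print("0.1 ms exposures:", np.round(sharp, 3))    # values track the oscillation in time
```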

Attosecond research

So what kind of research questions can attosecond pulses help answer?

For one, breaking a chemical bond is a fundamental process in nature where electrons that are shared between two atoms separate out into unbound atoms. The previously shared electrons undergo ultrafast changes during this process, and attosecond pulses[11] made it possible for researchers to follow the real-time breaking of a chemical bond.

The ability to generate attosecond pulses[12] – the research for which three researchers earned the 2023 Nobel Prize in physics[13] – first became possible in the early 2000s, and the field has continued to grow rapidly[14] since. By providing shorter snapshots of atoms and molecules, attosecond spectroscopy has helped researchers understand electron behavior in single molecules, such as how electron charge migrates[15] and how chemical bonds[16] between atoms break.

On a larger scale, attosecond technology has also been applied to studying how electrons behave in liquid water[17] as well as electron transfer in solid-state semiconductors[18]. As researchers continue to improve their ability to produce attosecond light pulses, they’ll gain a deeper understanding of the basic particles that make up matter.

Read more


The Wireless Emergency Alert system[1] is scheduled to have its third nationwide test[2] on Oct. 4, 2023. The system is a public safety tool that allows authorities to alert people via their mobile devices about dangerous weather, missing children and other situations requiring public attention.

Similar tests in 2018 and 2021 caused a degree of public confusion[3] and resistance[4]. In addition, there was confusion around the first test of the U.K. system in April 2023[5], and an outcry surrounding accidental alert messages such as those sent in Hawaii in January 2018[6] and in Florida in April 2023[7].

The federal government lists five types of emergency alerts[8]: National (formerly labeled Presidential), Imminent Threat, Public Safety, America’s Missing: Broadcast Emergency Response (Amber), and Opt-in Test Messages. You can opt out of any except National Alerts, which are reserved for national emergencies. The Oct. 4 test is a National Alert.

We are a media studies researcher[9] and a communications researcher[10] who study emergency alert systems. We believe that concerns about previous tests raise two questions: Is public trust in emergency alerting eroding? And how might the upcoming test rebuild it?

Confusion and resistance

In an ever-updating digital media environment, emergency alerts appear as part of a constant stream of updates, buzzes, reminders and notifications on people’s smartphones. Over-alerting is a common fear in emergency management circles[11] because it can lead people to ignore alerts and not take needed action. The sheer volume of different updates can be similarly overwhelming, burying emergency alerts in countless other messages. Many people have even opted out of alerts[12] when possible, rummaging through settings and toggling off every alert they can find.

Even when people receive alerts, however, there is potential for confusion and rejection. All forms of emergency alerts rely on the recipients’ trust[13] in the people or organization responsible for the alert. But it’s not always clear who the sender is. As one emergency manager explained to one of us regarding alerts used during COVID-19: “People were more confused because they got so many different notifications, especially when they don’t say who they’re from.”

When the origin of an alert is unclear, or the recipient perceives it to have a political bias counter to their own views[14], people may become confused or resistant to the message. Prior tests and use of the Wireless Emergency Alert system have indicated strong anti-authority attitudes, particularly following the much-derided 2018 test of what was then called the Presidential Alert message class[15]. There are already conspiracy theories[16] online about the upcoming test.

People receive mobile alerts from then-president Donald Trump in a ‘Saturday Night Live’ sketch aired on Oct. 6, 2018.

Trust in alerts is further reduced by the overall lack of testing and public awareness work done on behalf of the Wireless Emergency Alert system since its launch in June 2012[17]. As warning expert Dennis Mileti explained in his 2018 Federal Emergency Management Agency PrepTalk[18], routine public tests are essential for warning systems’ effectiveness. However, the Wireless Emergency Alert system has been tested at the national level only twice, and there has been little public outreach to explain the system by either the government or technology companies.

More exposure and info leads to more trust

The upcoming nationwide test may offer a moment that could rebuild trust in the system. A survey administered in the days immediately following the 2021 national test found that more respondents believed that the National Alert message class label would signal more trustworthy information[19] than the Presidential Alert message class label.

Similarly, in contrast to the 2021 test, which targeted only select users, the Oct. 4 test is slated to reach all compatible devices in the U.S. Since users cannot opt out of the National Alert message class, this week’s test is a powerful opportunity to build awareness about the potential benefits of a functional federal emergency alert system.

The Oct. 4 test message is expected to state, “THIS IS A TEST of the National Wireless Emergency Alert system. No action is needed.” We instead suggest that action is, in fact, urgently needed to help people better understand the rapidly changing mobile alert and warning ecosystem that confronts them. Familiarity with this system is what will allow it to support public health and safety, and address the crises of the 21st century.

Here are steps that you can take now to help make the Wireless Emergency Alert system more effective:

  • The Wireless Emergency Alert system is only one form of emergency alert. Identify which mobile notification systems are used by your local emergency management organizations: police, fire and emergency services. Know which systems are opt-in and opt-out, and opt in to those needed. Ensure access to other sources of information during an emergency, such as local radio and television, or National Oceanic and Atmospheric Administration weather radio.

  • Understand the meaning of mobile device notification settings. Just because you are opted in to “Emergency Alerts” on your cellphone does not necessarily mean you are signed up to receive notifications from local authorities. Check the FEMA website[20] for information about the Wireless Emergency Alert system and your local emergency management organizations’ websites about opt-in systems.

  • Have a plan for contacting family, friends and neighbors during an emergency. Decide in advance who will help the vulnerable members of your community.

  • Find out if your local emergency management organizations test their alert systems, and make sure to receive those local tests.

  • Anticipate the possibility that mobile systems will be damaged or unavailable during a crisis and prepare essentials for sheltering in place or quick evacuation.

Finally, push back on the lack of information and rise of misinformation about alerts by sharing reliable information about emergency alerts with your family and friends.

Read more

On April 8, 1911, Dutch physicist Heike Kamerlingh Onnes[1] scribbled in pencil an almost unintelligible note into a kitchen notebook[2]: “near enough null.”

The note referred to the electrical resistance he’d measured during a landmark experiment that would later be credited as the discovery of superconductivity. But first, he and his team would need many more trials to confirm the measurement.

Their discovery opened up a world of potential scientific applications. The century since has seen many advances, but superconductivity researchers today can take lessons from Onnes’ original, Nobel Prize-winning work[3].

I have always been interested in origin stories. As a physics professor and the author of books on the history of physics[4], I look for the interesting backstory – the twists, turns and serendipities that lie behind great discoveries.

The true stories behind these discoveries are usually more chaotic than the rehearsed narratives crafted after the fact, and some of the lessons learned from Onnes’ experiments remain relevant today as researchers search for new superconductors that might, one day, operate near room temperature.

Superconductivity

Superconductivity is a rare quantum effect that allows electrical currents to flow without resistance in superconducting wires, enabling[5] a myriad of scientific applications. These include MRI machines[6] and powerful particle accelerators[7].

Imagine giving a single push to a row of glass beads strung on a frictionless wire. Once the beads start moving down the wire, they never stop, like a perpetual motion[8] machine. That’s the idea behind superconductivity – particles flowing without resistance.

Superconductivity happens when a current experiences no electrical resistance.

For superconductors to work, they need to be cooled to ultra-low temperatures colder than any Arctic blast. That’s why Onnes’ original work cooling helium to near absolute zero temperature[9] set the stage for his unexpected discovery of superconductivity.

The discovery

Onnes[10], a physics professor at the University of Leiden in the Netherlands, built the leading low-temperature physics laboratory in the world in the first decade of the 20th century.

His lab[11] was the first to turn helium from a gas to a liquid by making the gas expand and cool. Using this method, the lab cooled helium to a temperature of -452 degrees Fahrenheit (-269 degrees Celsius).

Onnes then began studying the electrical conductivity of metals at these cold temperatures. He started with mercury because mercury in liquid form can conduct electricity, making it easy to fill into glass tubes. At low temperatures, the mercury would freeze solid, creating metallic wires that Onnes could use in his conductivity experiments.

On April 8, 1911, his lab technicians transferred liquid helium into a measurement cryostat – a glass container with a vacuum jacket to insulate it from the room’s heat. They cooled the helium to -454 F (-270 C) and then measured the electrical resistance of the mercury wire by sending a small current through it and measuring the voltage.
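For illustration, the arithmetic behind such a measurement is simply Ohm’s law, R = V/I. The current and voltage values in this sketch are invented for the example and are not Onnes’ actual readings:

```python
# Ohm's-law arithmetic behind a resistance measurement: send a known current,
# measure the voltage drop, and divide. The numbers below are purely illustrative.
def resistance_ohms(voltage_v: float, current_a: float) -> float:
    """R = V / I for a simple two-terminal measurement."""
    return voltage_v / current_a

# Normal metallic state: a measurable voltage drop appears across the mercury wire.
print(resistance_ohms(voltage_v=1.0e-3, current_a=10e-3))  # 0.1 ohm

# Superconducting state: the voltage falls below what the instrument can resolve,
# so the computed resistance comes out "near enough null".
print(resistance_ohms(voltage_v=1.0e-9, current_a=10e-3))  # 1e-7 ohm, indistinguishable from zero here
```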

It was then that Onnes wrote the cryptic “near enough null” measurement into his kitchen notebook[12], meaning that the wire was conducting electricity without any measurable resistance.

That date of April 8 is often quoted as the discovery of superconductivity, but the full story isn’t so simple, because scientists can’t accept a scribbled “near enough null” as sufficient proof of a new discovery.

In pursuit of proof

Onnes’ team performed its next experiment more than six weeks later[13], on May 23. On this day, they cooled the cryostat again to -454 F (-270 C) and then let the temperature slowly rise.

At first they barely measured any electrical resistance, indicating superconductivity. The resistance stayed small up to -452 F, when it suddenly rose by over a factor of 400 as the temperature inched up just a fraction of a degree.

The rise was so rapid and so unexpected that they started searching for some form of electrical fault or open circuit that might have been caused by the temperature shifts. But they couldn’t find anything wrong. They spent five more months improving their system before trying again. On Oct. 26 they repeated the experiment, reproducing the sudden rise in resistance they had seen earlier.

A graph with the resistance of mercury on the y-axis and temperature on the x-axis, showing a sharp drop.
The resistance of mercury as recorded on Oct. 26, 1911, by Onnes’ lab. Heike Kamerlingh Onnes via Wikimedia Commons[14]

One week later, Onnes presented these results at the first Solvay Conference[15], and two years later he received his Nobel Prize in physics, recognizing his low-temperature work generally but not superconductivity specifically.

It took another three years of diligent work before Onnes had his irrefutable evidence: He measured persistent currents that did not decay, demonstrating truly zero resistance and superconductivity on April 24, 1914.

New frontiers for critical temperatures

In the decades following Onnes’ discovery, many researchers have explored[16] how metals act at supercooled temperatures and have learned more about superconductivity.

But if researchers can observe superconductivity only at super low temperatures, it’s hard to make anything useful. It is too expensive to operate a machine practically if it works only at -400 F (-240 C).

So, scientists began searching for superconductors that can work at practical temperatures. For instance, K. Alex Müller and J. Georg Bednorz at the IBM research laboratory[17] in Switzerland figured out that metal oxides[18] like lanthanum-barium-copper oxide, known as LBCO, could be good candidates[19].

It took the IBM team about three years to find superconductivity in LBCO. But when they did, their work set a new record[20], with superconductivity observed at -397 F (-238 C) in 1986.

A year later, in 1987, a lab in Houston replaced lanthanum in LBCO with the element yttrium to create YBCO. They demonstrated superconductivity at -292 F[21]. This discovery made YBCO the first practical superconductor, because it could work while immersed in inexpensive liquid nitrogen.
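A quick unit-conversion check shows why liquid nitrogen is enough: its boiling point of roughly -321 F at atmospheric pressure – a standard textbook value, not a figure from this article – sits safely below YBCO’s quoted transition temperature:

```python
# Convert the quoted Fahrenheit temperatures to Celsius and kelvin.
# The liquid nitrogen boiling point (~-321 F) is an assumed textbook value.
def f_to_c(f):
    return (f - 32) * 5 / 9

def f_to_k(f):
    return f_to_c(f) + 273.15

ybco_tc_f = -292       # YBCO transition temperature quoted above
liquid_n2_bp_f = -321  # approximate boiling point of liquid nitrogen

print(f"YBCO Tc:         {f_to_c(ybco_tc_f):.0f} C, {f_to_k(ybco_tc_f):.0f} K")           # ~-180 C, ~93 K
print(f"Liquid nitrogen: {f_to_c(liquid_n2_bp_f):.0f} C, {f_to_k(liquid_n2_bp_f):.0f} K")  # ~-196 C, ~77 K
# A liquid nitrogen bath therefore keeps YBCO below its transition temperature.
```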

Since then, researchers have observed superconductivity at temperatures as high as -164 F[22] (-109 C), but achieving a room-temperature superconductor has remained elusive.

Chart of the discoveries of new superconductors plotted as critical temperature versus year of discovery, with each discovery labeled with a shape, color and abbreviation.
Timeline of accomplishments in superconductivity research. Gingras.ol/Wikimedia Commons[23], CC BY-NC-SA[24]

In 2023, two groups claimed they had evidence for room-temperature superconductivity, though both reports have been met with sharp skepticism[25], and both are now in limbo following further scrutiny.

Superconductivity has always been tricky to prove because some metals can masquerade as superconductors. The lessons learned by Onnes a century ago – that these discoveries require time, patience and, most importantly, proof of currents that never stop – are still relevant today.

Read more

The 2023 Nobel Prize in physiology or medicine[1] will go to Katalin Karikó and Drew Weissman for their discovery that modifying mRNA[2] – a form of genetic material your body uses to produce proteins – could reduce unwanted inflammatory responses and allow it to be delivered into cells. While the impact of their findings may not have been apparent at the time of their breakthrough over a decade ago, their work paved the way for the development of the Pfizer-BioNTech and Moderna COVID-19 vaccines[3], as well as many other therapeutic applications currently in development.

We asked André O. Hudson, a biochemist and microbiologist[4] at the Rochester Institute of Technology, to explain how basic research like that of this year’s Nobel Prize winners provides the foundations for science – even when its far-reaching effects won’t be felt until years later.

What is basic science?

Basic research[5], sometimes called fundamental research, is a type of investigation with the overarching goal of understanding natural phenomena like how cells work or how birds can fly. Scientists are asking the fundamental questions of how, why, when, where and if in order to bridge a gap in curiosity and understanding about the natural world.

Researchers sometimes conduct basic research with the hope of eventually developing a technology or drug based on that work. But what many scientists typically do in academia is ask fundamental questions with answers that may or may not ever lead to practical applications.

Humans, and the animal kingdom as a whole, are wired to be curious[6]. Basic research scratches that itch.

What are some basic science discoveries that went on to have a big influence on medicine?

The 2023 Nobel Prize in physiology or medicine[7] acknowledges basic science work done in the early 2000s. Karikó and Weissman’s discovery about modifying mRNA to reduce the body’s inflammatory response to it allowed other researchers to leverage it to make improved vaccines.

Another example is the discovery of antibiotics[8], which was based on an unexpected observation. In the late 1920s, the microbiologist Alexander Fleming was growing a species of bacteria in his lab and found that his Petri dish was accidentally contaminated with the fungus Penicillium notatum. He noticed that wherever the fungus was growing, it impeded or inhibited the growth of the bacteria. He wondered why that was happening and subsequently went on to isolate penicillin, which was approved for medical use in the early 1940s.

This work fed into more questions that ushered in the age of antibiotics. The 1952 Nobel Prize in physiology or medicine was awarded to Selman Waksman for his discovery of streptomycin[9], the first antibiotic to treat tuberculosis.

Penicillin was discovered by accident.

Basic research often involves seeing something surprising, wanting to understand why and deciding to investigate further. Early discoveries start from a basic observation, asking the simple question of “How?” Only later are they parlayed into a medical technology that helps humanity.

Why does it take so long to get from curiosity-driven basic science to a new product or technology?

The mRNA modification discovery could be considered to be on a relatively fast track from basic science to application. Less than 15 years passed between Karikó and Weissman’s findings and the COVID-19 vaccines. The importance of their discovery came to the forefront with the pandemic and the millions of lives[10] they saved.

Most basic research won’t reach the market until several decades[11] after its initial publication in a science journal. One reason is because it depends on need. For example, orphan diseases[12] that affect only a small number of people will get less attention and funding than conditions that are ubiquitous in a population, like cancer or diabetes. Companies don’t want to spend billions of dollars developing a drug that will only have a small return on their investment. Likewise, because the return on investment for basic research often isn’t clear, it can be a hard sell to support financially.

Another reason is cultural. Scientists are trained to chase after funding and support for their work wherever they can find it. But sometimes that’s not as easy as it seems.

A good example of this was when the human genome was first sequenced[13] in the early 2000s. A lot of people thought that having access to the full sequence would lead to treatments and cures for many different diseases. But that has not been the case[14], because there are many nuances to translating basic research to the clinic. What works in a cell or an animal might not translate into people. There are many steps and layers in the process to get there.

Why is basic science important?

For me, the most critical reason is that basic research is how we train and mentor future scientists[15].

In an academic setting, telling students “Let’s go develop an mRNA vaccine” versus asking “How does mRNA work in the body?” influences how they approach science. How do they design experiments? Do they start the study going forward or backward? Are they argumentative or cautious in how they present their findings?

Close-up of scientist wearing nitrile gloves looking into microscope hovering over Petri dish
There are many steps between translating findings in a lab to the clinic. Marco VDM/E+ via Getty Images[16]

Almost every scientist is trained under a basic research umbrella of how to ask questions and go through the scientific method. You need to understand how, when and where mRNAs are modified before you can even begin to develop an mRNA vaccine. I believe the best way to inspire future scientists is to encourage them to expand on their curiosity in order to make a difference.

When I was writing my dissertation, I was relying on studies that were published in the late 1800s and early 1900s. Many of these studies are still cited in scientific articles today. When researchers share their work, though it may not be today or tomorrow, or 10 to 20 years from now, it will be of use to someone else in the future. You’ll make a future scientist’s job a little bit easier, and I believe that’s a great legacy to have.

What is a common misconception about basic science?

Because any immediate use for basic science can be very hard to see, it’s easy to think this kind of research is a waste of money or time[17]. Why are scientists breeding mosquitoes in these labs? Or why are researchers studying migratory birds? The same argument has been made with astronomy. Why are we spending billions of dollars putting things into space? Why are we looking to the edge of the universe and studying stars when they are millions and billions of light years away? How does it affect us?

There is a need for more scientific literacy[18], because without it, people may find it difficult to understand why basic research is necessary for future breakthroughs that will have a major effect on society.

In the short term, the worth of basic research can be hard to see. But in the long term, history has shown that a lot of what we take for granted now, such as common medical equipment like X-rays[19], lasers[20] and MRIs[21], came from basic things people discovered in the lab.

And it still goes down to the fundamental questions – we’re a species that seeks answers to things we don’t know. As long as curiosity is a part of humanity, we’re always going to be seeking answers.

Read more


Twenty years ago, nanotechnology was the artificial intelligence of its time. The specific details of these technologies are, of course, a world apart. But the challenges of ensuring each technology’s responsible and beneficial development are surprisingly alike. Nanotechnology, which encompasses technologies working at the scale of individual atoms and molecules[1], even carried its own existential risk in the form of “gray goo[2].”

As potentially transformative AI-based technologies continue to emerge and gain traction, though, it is not clear that people in the artificial intelligence field are applying the lessons learned from nanotechnology.

As scholars of the future[3] of innovation[4], we explore these parallels in a new commentary in the journal Nature Nanotechnology[5]. The commentary also looks at how a lack of engagement with a diverse community of experts and stakeholders threatens AI’s long-term success.

Nanotech excitement and fear

In the late 1990s and early 2000s, nanotechnology transitioned from a radical and somewhat fringe idea to mainstream acceptance. The U.S. government and other administrations around the world ramped up investment in what was claimed to be “the next industrial revolution[6].” Government experts made compelling arguments for how, in the words of a foundational report from the U.S. National Science and Technology Council[7], “shaping the world atom by atom” would positively transform economies, the environment and lives.

But there was a problem. On the heels of public pushback against genetically modified crops[8], together with lessons learned from recombinant DNA[9] and the Human Genome Project[10], people in the nanotechnology field had growing concerns that there could be a similar backlash against nanotechnology if it were handled poorly.

A whiteboard primer on nanotechnology – and its responsible development.

These concerns were well grounded. In the early days of nanotechnology, nonprofit organizations such as the ETC Group[11], Friends of the Earth[12] and others strenuously objected to claims that this type of technology was safe, that there would be minimal downsides and that experts and developers knew what they were doing. The era saw public protests against nanotechnology[13] and – disturbingly – even a bombing campaign by environmental extremists that targeted nanotechnology researchers[14].

Just as with AI today, there were concerns about the effect on jobs[15] as a new wave of skills and automation swept away established career paths. Also foreshadowing current AI concerns, worries about existential risks began to emerge, notably the possibility of self-replicating “nanobots” converting all matter on Earth into copies of themselves, resulting in a planet-encompassing “gray goo.” This particular scenario was even highlighted by Sun Microsystems co-founder Bill Joy in a prominent article in Wired magazine[16].

Many of the potential risks associated with nanotechnology, though, were less speculative. Just as there’s a growing focus on more immediate risks associated with AI[17] in the present, the early 2000s saw an emphasis on examining tangible challenges related to ensuring the safe and responsible development of nanotechnology[18]. These included potential health and environmental impacts, social and ethical issues, regulation and governance, and a growing need for public and stakeholder collaboration.

The result was a profoundly complex landscape around nanotechnology development that promised incredible advances yet was rife with uncertainty and the risk of losing public trust if things went wrong.

How nanotech got it right

One of us – Andrew Maynard – was at the forefront of addressing the potential risks of nanotechnology in the early 2000s as a researcher, co-chair of the interagency Nanotechnology Environmental and Health Implications[19] working group and chief science adviser to the Woodrow Wilson International Center for Scholars Project on Emerging Technology[20].

At the time, working on responsible nanotechnology development felt like playing whack-a-mole with the health, environment, social and governance challenges presented by the technology. For every solution, there seemed to be a new problem.

Yet, through engaging with a wide array of experts and stakeholders – many of whom were not authorities on nanotechnology but who brought critical perspectives and insights to the table – the field produced initiatives that laid the foundation for nanotechnology to thrive. This included multistakeholder partnerships[21], consensus standards[22], and initiatives spearheaded by global bodies such as the Organization for Economic Cooperation and Development[23].

As a result, many of the technologies people rely on today are underpinned by advances in nanoscale science and engineering[24]. Even some of the advances in AI rely on nanotechnology-based hardware[25].

In the U.S., much of this collaborative work was spearheaded by the cross-agency National Nanotechnology Initiative[26]. In the early 2000s, the initiative brought together representatives from across the government to better understand the risks and benefits of nanotechnology. It helped convene a broad and diverse array of scholars, researchers, developers, practitioners, educators, activists, policymakers and other stakeholders to help map out strategies for ensuring socially and economically beneficial nanoscale technologies.

In 2003, the 21st Century Nanotechnology Research and Development Act[27] became law and further codified this commitment to participation by a broad array of stakeholders. The coming years saw a growing number of federally funded initiatives – including the Center for Nanotechnology and Society at Arizona State University (where one of us was on the board of visitors) – that cemented the principle of broad engagement around emerging advanced technologies.

Experts only at the table

These and similar efforts around the world were pivotal in ensuring the emergence of beneficial and responsible nanotechnology. Yet despite similar aspirations around AI, these same levels of diversity and engagement are missing. AI development as practiced today is, by comparison, much more exclusionary. The White House has prioritized consultations with AI company CEOs[28], and Senate hearings[29] have drawn preferentially on technical experts[30].

According to lessons learned from nanotechnology, we believe this approach is a mistake. While members of the public, policymakers and experts outside the domain of AI may not fully understand the intimate details of the technology, they are often fully capable of understanding its implications. More importantly, they bring a diversity of expertise and perspectives to the table that is essential for the successful development of an advanced technology like AI.

This is why, in our Nature Nanotechnology commentary, we recommend learning from the lessons of nanotechnology[31], engaging early and often with experts and stakeholders who may not know the technical details and science behind AI but nevertheless bring knowledge and insights essential for ensuring the technology’s appropriate success.

UNESCO calls for broad participation in deciding AI’s future.

The clock is ticking

Artificial intelligence could be the most transformative technology that’s come along in living memory. Developed smartly, it could positively change the lives of billions of people. But this will happen only if society applies the lessons from past advanced technology transitions like the one driven by nanotechnology.

As with the formative years of nanotechnology, addressing the challenges of AI is urgent. The early days of an advanced technology transition set the trajectory for how it plays out over the coming decades. And with the recent pace of progress of AI, this window is closing fast.

It is not just the future of AI that’s at stake. Artificial intelligence is only one of many transformative emerging technologies. Quantum technologies[32], advanced genetic manipulation[33], neurotechnologies[34] and more are coming fast. If society doesn’t learn from the past to successfully navigate these imminent transitions, it risks losing out on the promises they hold and faces the possibility of each causing more harm than good.

Read more

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to the Curious Kids email address[2].

How do we know the age of the planets and stars? – Swara D., age 13, Thane, India

Measuring the ages of planets and stars helps scientists understand when they formed and how they change – and, in the case of planets, if life has had time to evolve on them[3].

Unfortunately, age is hard to measure for objects in space.

Stars like the Sun maintain the same brightness, temperature and size for billions of years[4]. Planet properties like temperature[5] are often set by the star they orbit rather than their own age and evolution. Determining the age of a star or planet can be as hard as guessing the age of a person who looks exactly the same from childhood to retirement.

Sussing out a star’s age

Fortunately, stars change subtly[6] in brightness and color over time. With very accurate measurements, astronomers can compare these measurements of a star to mathematical models[7] that predict what happens to stars as they get older and estimate an age from there.

Stars don’t just glow, they also spin. Over time, their spinning slows down[8], similar to how a spinning wheel slows down when it encounters friction. By comparing the spin speeds of stars of different ages, astronomers have been able to create mathematical relationships for the ages of stars[9], a method known as gyrochronology[10].
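As a deliberately simplified sketch of how such a spin-age relationship can work, the classic Skumanich square-root spin-down law, calibrated to the Sun, turns a Sun-like star’s rotation period into a rough age. Real gyrochronology calibrations also account for a star’s mass and color, so the numbers here are illustrative only:

```python
# Simplified gyrochronology sketch: rotation period grows roughly as the square
# root of age (Skumanich law), calibrated to the Sun. Illustrative only; real
# calibrations also depend on stellar mass and color.
SUN_AGE_GYR = 4.6        # approximate age of the Sun, in billions of years
SUN_PERIOD_DAYS = 25.0   # approximate solar rotation period

def age_from_spin_gyr(rotation_period_days: float) -> float:
    """Estimate a Sun-like star's age from its rotation period, assuming P ~ sqrt(t)."""
    return SUN_AGE_GYR * (rotation_period_days / SUN_PERIOD_DAYS) ** 2

print(f"{age_from_spin_gyr(12.0):.1f} billion years")  # faster spinner -> younger (~1.1)
print(f"{age_from_spin_gyr(25.0):.1f} billion years")  # solar rotation -> roughly the Sun's age
```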
A close up image of the Sun in outer space
Researchers estimate the Sun is 4.58 billion years old. NASA via Getty Images[11]

A star’s spin also generates a strong magnetic field and produces magnetic activity, such as stellar flares[12] – powerful bursts of energy and light that occur on stars’ surfaces. A steady decline in magnetic activity from a star can also help estimate its age.

A more advanced method for determining the ages of stars is called asteroseismology[13], or star shaking. Astronomers study vibrations on the surfaces of stars caused by waves that travel through their interiors. Young stars have different vibrational patterns than old stars. By using this method, astronomers have estimated[14] the Sun to be 4.58 billion years old.

Piecing together a planet’s age

In the solar system, radionuclides[15] are the key to dating planets. These are special atoms that slowly release energy over a long period of time. As natural clocks, radionuclides help scientists determine the ages of all kinds of things, from rocks[16] to bones[17] and pottery[18].

Using this method, scientists have determined that the oldest known meteorite is 4.57 billion years old[19], almost identical to the Sun’s asteroseismology measurement of 4.58 billion years. The oldest known rocks on Earth have slightly younger ages of 4.40 billion years[20]. Similarly, soil brought back from the Moon during the Apollo missions had radionuclide ages of up to 4.6 billion years[21].
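A minimal sketch of the radionuclide “clock” arithmetic, assuming no daughter atoms were present when the rock formed and using uranium-238 (half-life of about 4.47 billion years, decaying toward lead-206) as the example isotope; the daughter-to-parent ratios below are made up for illustration:

```python
import math

# Radiometric age from the ratio of daughter atoms to remaining parent atoms,
# assuming the rock started with no daughter atoms. Ratios below are invented.
U238_HALF_LIFE_GYR = 4.47  # approximate half-life of uranium-238, in billions of years

def radiometric_age_gyr(daughter_per_parent: float, half_life_gyr: float) -> float:
    """Age t satisfying daughter/parent = 2**(t / half_life) - 1."""
    return (half_life_gyr / math.log(2)) * math.log(1 + daughter_per_parent)

# One daughter atom per remaining parent atom: the rock is about one half-life old.
print(f"{radiometric_age_gyr(1.0, U238_HALF_LIFE_GYR):.2f} billion years")  # ~4.47
# A smaller ratio means a younger rock.
print(f"{radiometric_age_gyr(0.5, U238_HALF_LIFE_GYR):.2f} billion years")  # ~2.61
```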
A close up image of craters on the surface of the moon.
Craters on the moon’s surface. Tomekbudujedomek/Moment via Getty Images[22]

Although studying radionuclides is a powerful method for measuring the ages of planets, it usually requires having a rock in hand. Typically, astronomers only have a picture of a planet to go by. Astronomers often determine the ages of rocky space objects like Mars or the Moon by counting their craters[23]. Older surfaces have more craters than younger surfaces. However, erosion from water, wind, cosmic rays[24] and lava flow from volcanoes can wipe away evidence of earlier impacts.

These aging techniques don’t work for giant planets like Jupiter that have deeply buried surfaces. However, astronomers can estimate their ages by counting craters on their moons[25] or by studying the distribution of certain classes of meteorites[26] scattered by them; both approaches give results consistent with the radionuclide and cratering methods used for rocky planets.

We cannot yet directly measure the ages of planets outside our solar system with current technology.

How accurate are these estimates?

Our own solar system provides the best check for accuracy, since astronomers can compare the radionuclide ages of rocks on the Earth, Moon or asteroids to the asteroseismology age of the Sun, and these match very well. Stars in clusters like the Pleiades[27] or Omega Centauri[28] are believed to have all formed at roughly the same time, so age estimates for individual stars in these clusters should be the same. In some stars, astronomers can detect[29] radionuclides like uranium – a heavy metal found in rocks and soil – in their atmospheres, which can be used to check ages derived from other methods.

Astronomers believe planets are roughly the same age as their host stars, so improving methods to determine a star’s age helps determine a planet’s age as well. By studying subtle clues, it’s possible to make an educated guess of the age of an otherwise steadfast star.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to the Curious Kids email address[30]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Read more

The human brain can change[1] – but usually only slowly and with great effort, such as when learning a new sport or foreign language, or recovering from a stroke. Learning new skills correlates with changes in the brain[2], as evidenced by neuroscience research with animals[3] and functional brain scans in people. Presumably, if you master Calculus 1, something is now different in your brain. Furthermore, motor neurons in the brain expand and contract[4] depending on how often they are exercised – a neuronal reflection of “use it or lose it.”

People may wish their brains could change faster – not just when learning new skills, but also when overcoming problems like anxiety, depression and addictions.

Clinicians and scientists know there are times the brain can make rapid, enduring changes. Most often, these occur in the context of traumatic experiences[5], leaving an indelible imprint on the brain.

But positive experiences, which alter one’s life for the better, can occur just as fast. Think of a spiritual awakening[6], a near-death experience[7] or a feeling of awe in nature[8].

a road splits in the woods, sun shines through green leafy trees
A transformative experience can be like a fork in the road, changing the path you are on. Westend61 via Getty Images[9]

Social scientists call events like these psychologically transformative experiences or pivotal mental states[10]. For the rest of us, they’re forks in the road. Presumably, these positive experiences[11] quickly change some “wiring” in the brain.

How do these rapid, positive transformations happen? It seems the brain has a way to facilitate accelerated change. And here’s where it gets really interesting: Psychedelic-assisted psychotherapy appears to tap into this natural neural mechanism.

Psychedelic-assisted psychotherapy

Those who’ve had a psychedelic experience usually describe it as a mental journey that’s impossible to put into words. However, it can be conceptualized as an altered state of consciousness[12] with distortions of perception, modified sense of self and rapidly changing emotions. Presumably there is a relaxation of higher brain control, which allows thoughts and feelings from deeper brain regions to emerge into conscious awareness.

Psychedelic-assisted psychotherapy[13] combines the psychology of talk therapy[14] with the power of a psychedelic experience. Researchers have described cases[15] in which subjects report profound, personally transformative experiences after one six-hour session with the psychedelic substance psilocybin, taken in conjunction with psychotherapy. For example, patients distressed about advancing cancer[16] have quickly experienced relief and an unexpected acceptance of the approaching end. How does this happen?

glowing green tendrils of a neuron against a black background
Neuronal spines are the little bumps along the spreading branches of a neuron. Patrick Pla via Wikimedia Commons[17], CC BY-SA[18]

Research suggests that new skills, memories[19] and attitudes are encoded in the brain by new connections between neurons – sort of like branches of trees growing toward each other. Neuroscientists even call the pattern of growth arborization[20].

Researchers using a technique called two-photon microscopy[21] can observe this process in living cells by following the formation and regression of spines on the neurons. The spines are one half of the synapses that allow for communication between one neuron and another.

Scientists have thought that enduring spine formation could be established only with focused, repetitive mental energy. However, a lab at Yale recently documented rapid spine formation in the frontal cortex of mice[22] after one dose of psilocybin. Researchers found that mice given the mushroom-derived drug had about a 10% increase in spine formation. The changes were already present when the neurons were examined one day after treatment and endured for over a month.

diagram of little bumps along a neuron, enlarged at different scales
Tiny spines along a neuron’s branches are a crucial part of how one neuron receives a message from another. Edmund S. Higgins

A mechanism for psychedelic-induced change

Psychoactive molecules primarily change brain function through the receptors on the neural cells. The serotonin receptor 5-HT, the one famously tweaked by antidepressants[23], comes in a variety of subtypes. Psychedelics such as DMT, the active chemical in the plant-based psychedelic ayahuasca[24], stimulate a receptor subtype[25] called 5-HT2A. This receptor also appears to mediate the hyperplastic states[26] when a brain is changing quickly.

These 5-HT2A receptors that DMT activates are not only on the neuron cell surface but also inside the neuron. It’s only the 5-HT2A receptor inside the cell that facilitates rapid change in neuronal structure. Serotonin can’t get through the cell membrane[27], which is why people don’t hallucinate when taking antidepressants like Prozac or Zoloft. The psychedelics, on the other hand, slip through the cell’s exterior and tweak the 5-HT2A receptor, stimulating dendritic growth and increased spine formation.

Here’s where this story all comes together. In addition to being the active ingredient in ayahuasca, DMT is an endogenous molecule[28] synthesized naturally in mammalian brains. As such, human neurons are capable of producing their own “psychedelic” molecule, although likely in tiny quantities. It’s possible the brain uses its own endogenous DMT as a tool for change – as when forming dendritic spines on neurons – to encode pivotal mental states. And it’s possible psychedelic-assisted psychotherapy uses this naturally occurring neural mechanism to facilitate healing.

A word of caution

In her essay collection “These Precious Days[29],” author Ann Patchett describes taking mushrooms with a friend who was struggling with pancreatic cancer[30]. The friend had a mystical experience and came away feeling deeper connections to her family and friends. Patchett, on the other hand, said she spent eight hours “hacking up snakes in some pitch-black cauldron of lava at the center of the Earth.” It felt like death to her.

Psychedelics are powerful, and none of the classic psychedelic drugs, such as LSD, are approved yet for treatment. The U.S. Food and Drug Administration in 2019 did approve ketamine[31], in conjunction with an antidepressant, to treat depression in adults. Psychedelic-assisted psychotherapy with MDMA (often called ecstasy or molly) for PTSD[32] and psilocybin for depression[33] are in Phase 3 trials.

Read more
