A person in a full-body covering, wearing glasses, a face mask and rubber gloves, opens a small round door on a shiny steel piece of equipment in a laboratory

Twenty years ago, nanotechnology was the artificial intelligence of its time. The specific details of these technologies are, of course, a world apart. But the challenges of ensuring each technology’s responsible and beneficial development are surprisingly alike. Nanotechnology, which encompasses technologies built at the scale of individual atoms and molecules[1], even carried its own existential risk in the form of “gray goo[2].”

As potentially transformative AI-based technologies continue to emerge and gain traction, though, it is not clear that people in the artificial intelligence field are applying the lessons learned from nanotechnology.

As scholars of the future[3] of innovation[4], we explore these parallels in a new commentary in the journal Nature Nanotechnology[5]. The commentary also looks at how a lack of engagement with a diverse community of experts and stakeholders threatens AI’s long-term success.

Nanotech excitement and fear

In the late 1990s and early 2000s, nanotechnology transitioned from a radical and somewhat fringe idea to mainstream acceptance. The U.S. government and other governments around the world ramped up investment in what was claimed to be “the next industrial revolution[6].” Government experts made compelling arguments for how, in the words of a foundational report from the U.S. National Science and Technology Council[7], “shaping the world atom by atom” would positively transform economies, the environment and lives.

But there was a problem. On the heels of public pushback against genetically modified crops[8], together with lessons learned from recombinant DNA[9] and the Human Genome Project[10], people in the nanotechnology field had growing concerns that there could be a similar backlash against nanotechnology if it were handled poorly.

A whiteboard primer on nanotechnology – and its responsible development.

These concerns were well grounded. In the early days of nanotechnology, nonprofit organizations such as the ETC Group[11], Friends of the Earth[12] and others strenuously objected to claims that this type of technology was safe, that there would be minimal downsides and that experts and developers knew what they were doing. The era saw public protests against nanotechnology[13] and – disturbingly – even a bombing campaign by environmental extremists that targeted nanotechnology researchers[14].

Just as with AI today, there were concerns about the effect on jobs[15] as a new wave of skills and automation swept away established career paths. Also foreshadowing current AI concerns, worries about existential risks began to emerge, notably the possibility of self-replicating “nanobots” converting all matter on Earth into copies of themselves, resulting in a planet-encompassing “gray goo.” This particular scenario was even highlighted by Sun Microsystems co-founder Bill Joy in a prominent article in Wired magazine[16].

Many of the potential risks associated with nanotechnology, though, were less speculative. Just as there’s a growing focus on more immediate risks associated with AI[17] in the present, the early 2000s saw an emphasis on examining tangible challenges related to ensuring the safe and responsible development of nanotechnology[18]. These included potential health and environmental impacts, social and ethical issues, regulation and governance, and a growing need for public and stakeholder collaboration.

The result was a profoundly complex landscape around nanotechnology development that promised incredible advances yet was rife with uncertainty and the risk of losing public trust if things went wrong.

How nanotech got it right

One of us – Andrew Maynard – was at the forefront of addressing the potential risks of nanotechnology in the early 2000s as a researcher, co-chair of the interagency Nanotechnology Environmental and Health Implications[19] working group and chief science adviser to the Woodrow Wilson International Center for Scholars Project on Emerging Nanotechnologies[20].

At the time, working on responsible nanotechnology development felt like playing whack-a-mole with the health, environment, social and governance challenges presented by the technology. For every solution, there seemed to be a new problem.

Yet, through engaging with a wide array of experts and stakeholders – many of whom were not authorities on nanotechnology but who brought critical perspectives and insights to the table – the field produced initiatives that laid the foundation for nanotechnology to thrive. This included multistakeholder partnerships[21], consensus standards[22], and initiatives spearheaded by global bodies such as the Organization for Economic Cooperation and Development[23].

As a result, many of the technologies people rely on today are underpinned by advances in nanoscale science and engineering[24]. Even some of the advances in AI rely on nanotechnology-based hardware[25].

In the U.S., much of this collaborative work was spearheaded by the cross-agency National Nanotechnology Initiative[26]. In the early 2000s, the initiative brought together representatives from across the government to better understand the risks and benefits of nanotechnology. It helped convene a broad and diverse array of scholars, researchers, developers, practitioners, educators, activists, policymakers and other stakeholders to help map out strategies for ensuring socially and economically beneficial nanoscale technologies.

In 2003, the 21st Century Nanotechnology Research and Development Act[27] became law and further codified this commitment to participation by a broad array of stakeholders. The coming years saw a growing number of federally funded initiatives – including the Center for Nanotechnology and Society at Arizona State University (where one of us was on the board of visitors) – that cemented the principle of broad engagement around emerging advanced technologies.

Experts only at the table

These and similar efforts around the world were pivotal in ensuring the emergence of beneficial and responsible nanotechnology. Yet despite similar aspirations around AI, these same levels of diversity and engagement are missing. AI development as practiced today is, by comparison, much more exclusionary. The White House has prioritized consultations with AI company CEOs[28], and Senate hearings[29] have drawn preferentially on technical experts[30].

Based on the lessons learned from nanotechnology, we believe this approach is a mistake. While members of the public, policymakers and experts outside the domain of AI may not fully understand the intimate details of the technology, they are often fully capable of understanding its implications. More importantly, they bring a diversity of expertise and perspectives to the table that is essential for the successful development of an advanced technology like AI.

This is why, in our Nature Nanotechnology commentary, we recommend learning from the lessons of nanotechnology[31] and engaging early and often with experts and stakeholders who may not know the technical details and science behind AI but who nevertheless bring knowledge and insights essential for ensuring the technology’s success.

UNESCO calls for broad participation in deciding AI’s future.

The clock is ticking

Artificial intelligence could be the most transformative technology that’s come along in living memory. Developed smartly, it could positively change the lives of billions of people. But this will happen only if society applies the lessons from past advanced technology transitions like the one driven by nanotechnology.

As with the formative years of nanotechnology, addressing the challenges of AI is urgent. The early days of an advanced technology transition set the trajectory for how it plays out over the coming decades. And with the recent pace of progress of AI, this window is closing fast.

It is not just the future of AI that’s at stake. Artificial intelligence is only one of many transformative emerging technologies. Quantum technologies[32], advanced genetic manipulation[33], neurotechnologies[34] and more are coming fast. If society doesn’t learn from the past to successfully navigate these imminent transitions, it risks losing out on the promises they hold and faces the possibility of each causing more harm than good.

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to our email address[2].

How do we know the age of the planets and stars? – Swara D., age 13, Thane, India

Measuring the ages of planets and stars helps scientists understand when they formed and how they change – and, in the case of planets, whether life has had time to evolve on them[3]. Unfortunately, age is hard to measure for objects in space.

Stars like the Sun maintain the same brightness, temperature and size for billions of years[4]. Planet properties like temperature[5] are often set by the star they orbit rather than by their own age and evolution. Determining the age of a star or planet can be as hard as guessing the age of a person who looks exactly the same from childhood to retirement.

Sussing out a star’s age

Fortunately, stars change subtly[6] in brightness and color over time. With very accurate measurements, astronomers can compare these measurements of a star to mathematical models[7] that predict what happens to stars as they get older and estimate an age from there.

Stars don’t just glow, they also spin. Over time, their spinning slows down[8], similar to how a spinning wheel slows when it encounters friction. By comparing the spin speeds of stars of different ages, astronomers have been able to create mathematical relationships between spin and age[9], a method known as gyrochronology[10].
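To make the gyrochronology idea concrete, here is a toy calculation in Python. It assumes the simple Skumanich scaling, in which a Sun-like star’s rotation period grows roughly with the square root of its age; the solar calibration numbers are round values, and the function is purely illustrative rather than a research-grade model.

    # Toy gyrochronology: Skumanich scaling, rotation period P grows ~ sqrt(age).
    # Calibrated to the Sun: P of about 25 days at an age of about 4.6 billion years.
    SUN_PERIOD_DAYS = 25.0
    SUN_AGE_GYR = 4.6

    def age_from_spin(period_days):
        """Estimate a Sun-like star's age in billions of years from its spin period."""
        # Invert P = P_sun * sqrt(t / t_sun)  ->  t = t_sun * (P / P_sun)**2
        return SUN_AGE_GYR * (period_days / SUN_PERIOD_DAYS) ** 2

    # A star spinning twice as fast as the Sun comes out about a quarter its age.
    print(f"{age_from_spin(12.5):.1f} billion years")  # prints 1.1 billion years

Real gyrochronology relations also fold in a star’s color, a proxy for its mass, but the inverted square-root scaling captures the core idea: slower spin implies greater age.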
A close up image of the Sun in outer space
Researchers estimate the Sun is 4.58 billion years old. NASA via Getty Images[11]

A star’s spin also generates a strong magnetic field and produces magnetic activity, such as stellar flares[12] – powerful bursts of energy and light that occur on stars’ surfaces. A steady decline in magnetic activity from a star can also help estimate its age.

A more advanced method for determining the ages of stars is called asteroseismology[13], or star shaking. Astronomers study vibrations on the surfaces of stars caused by waves that travel through their interiors. Young stars have different vibrational patterns than old stars. Using this method, astronomers have estimated[14] the Sun to be 4.58 billion years old.

Piecing together a planet’s age

In the solar system, radionuclides[15] are the key to dating planets. These are special atoms that slowly release energy over a long period of time. As natural clocks, radionuclides help scientists determine the ages of all kinds of things, from rocks[16] to bones[17] and pottery[18].

Using this method, scientists have determined that the oldest known meteorite is 4.57 billion years old[19], almost identical to the Sun’s asteroseismology measurement of 4.58 billion years. The oldest known rocks on Earth are slightly younger, at 4.40 billion years[20]. Similarly, soil brought back from the Moon during the Apollo missions had radionuclide ages of up to 4.6 billion years[21].
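The “natural clock” arithmetic behind radionuclide dating can be shown with a short Python sketch. It uses the standard radioactive decay relation; the uranium-lead half-life is a textbook approximation, and the measured ratio is a hypothetical example, not data from a real sample.

    import math

    def age_from_ratio(daughter_per_parent, half_life_gyr):
        """Age in billions of years from a measured daughter-to-parent isotope ratio."""
        # Decay law: parent(t) = parent(0) * 2**(-t / half_life),
        # so daughter/parent = 2**(t / half_life) - 1, which inverts to:
        return half_life_gyr * math.log(1.0 + daughter_per_parent) / math.log(2.0)

    # Hypothetical rock: uranium-238 decays (through a chain) to lead-206,
    # with a half-life of roughly 4.47 billion years.
    print(f"{age_from_ratio(1.03, 4.47):.2f} billion years")  # about 4.57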
A close up image of craters on the surface of the moon.
Craters on the moon’s surface. Tomekbudujedomek/Moment via Getty Images[22]

Although studying radionuclides is a powerful method for measuring the ages of planets, it usually requires having a rock in hand. Typically, astronomers only have a picture of a planet to go by. Astronomers often determine the ages of rocky space objects like Mars or the Moon by counting their craters[23]. Older surfaces have more craters than younger surfaces. However, erosion from water, wind, cosmic rays[24] and lava flow from volcanoes can wipe away evidence of earlier impacts.

These aging techniques don’t work for giant planets like Jupiter, whose surfaces are deeply buried. However, astronomers can estimate their ages by counting craters on their moons[25] or by studying the distribution of certain classes of meteorites[26] scattered by them – estimates that are consistent with the radionuclide and cratering methods used for rocky planets.

We cannot yet directly measure the ages of planets outside our solar system with current technology.

How accurate are these estimates?

Our own solar system provides the best check for accuracy, since astronomers can compare the radionuclide ages of rocks on the Earth, Moon or asteroids to the asteroseismology age of the Sun, and these match very well. Stars in clusters like the Pleiades[27] or Omega Centauri[28] are believed to have all formed at roughly the same time, so age estimates for individual stars in these clusters should be the same. In some stars, astronomers can detect[29] radionuclides like uranium – a heavy metal found in rocks and soil – in their atmospheres, and these have been used to check the ages from other methods. Astronomers believe planets are roughly the same age as their host stars, so improving methods to determine a star’s age helps determine a planet’s age as well. By studying subtle clues, it’s possible to make an educated guess of the age of an otherwise steadfast star.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to our email address[30]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The human brain can change[1] – but usually only slowly and with great effort, such as when learning a new sport or foreign language, or recovering from a stroke. Learning new skills correlates with changes in the brain[2], as evidenced by neuroscience research with animals[3] and functional brain scans in people. Presumably, if you master Calculus 1, something is now different in your brain. Furthermore, motor neurons in the brain expand and contract[4] depending on how often they are exercised – a neuronal reflection of “use it or lose it.”

People may wish their brains could change faster – not just when learning new skills, but also when overcoming problems like anxiety, depression and addictions.

Clinicians and scientists know there are times the brain can make rapid, enduring changes. Most often, these occur in the context of traumatic experiences[5], leaving an indelible imprint on the brain.

But positive experiences, which alter one’s life for the better, can occur equally as fast. Think of a spiritual awakening[6], a near-death experience[7] or a feeling of awe in nature[8].

a road splits in the woods, sun shines through green leafy trees
A transformative experience can be like a fork in the road, changing the path you are on. Westend61 via Getty Images[9]

Social scientists call events like these psychologically transformative experiences or pivotal mental states[10]. For the rest of us, they’re forks in the road. Presumably, these positive experiences[11] quickly change some “wiring” in the brain.

How do these rapid, positive transformations happen? It seems the brain has a way to facilitate accelerated change. And here’s where it gets really interesting: Psychedelic-assisted psychotherapy appears to tap into this natural neural mechanism.

Psychedelic-assisted psychotherapy

Those who’ve had a psychedelic experience usually describe it as a mental journey that’s impossible to put into words. However, it can be conceptualized as an altered state of consciousness[12] with distortions of perception, a modified sense of self and rapidly changing emotions. Presumably there is a relaxation of higher brain control, which allows thoughts and feelings from deeper in the brain to emerge into conscious awareness.

Psychedelic-assisted psychotherapy[13] combines the psychology of talk therapy[14] with the power of a psychedelic experience. Researchers have described cases[15] in which subjects report profound, personally transformative experiences after one six-hour session with the psychedelic substance psilocybin, taken in conjunction with psychotherapy. For example, patients distressed about advancing cancer[16] have quickly experienced relief and an unexpected acceptance of the approaching end. How does this happen?

glowing green tendrils of a neuron against a black background
Neuronal spines are the little bumps along the spreading branches of a neuron. Patrick Pla via Wikimedia Commons[17], CC BY-SA[18]

Research suggests that new skills, memories[19] and attitudes are encoded in the brain by new connections between neurons – sort of like branches of trees growing toward each other. Neuroscientists even call the pattern of growth arborization[20].

Researchers using a technique called two-photon microscopy[21] can observe this process in living cells by following the formation and regression of spines on the neurons. The spines are one half of the synapses that allow for communication between one neuron and another.

Scientists have thought that enduring spine formation could be established only with focused, repetitive mental energy. However, a lab at Yale recently documented rapid spine formation in the frontal cortex of mice[22] after one dose of psilocybin. Researchers found that mice given the mushroom-derived drug had about a 10% increase in spine formation. These changes were evident when examined one day after treatment and endured for over a month.

diagram of little bumps along a neuron, enlarged at different scales
Tiny spines along a neuron’s branches are a crucial part of how one neuron receives a message from another. Edmund S. Higgins

A mechanism for psychedelic-induced change

Psychoactive molecules primarily change brain function through receptors on neural cells. The serotonin receptor 5-HT, the one famously tweaked by antidepressants[23], comes in a variety of subtypes. Psychedelics such as DMT, the active chemical in the plant-based psychedelic ayahuasca[24], stimulate one receptor subtype[25], called 5-HT2A. This receptor also appears to mediate the hyperplastic states[26] when a brain is changing quickly.

These 5-HT2A receptors that DMT activates are not only on the neuron cell surface but also inside the neuron. It’s only the 5-HT2A receptor inside the cell that facilitates rapid change in neuronal structure. Serotonin can’t get through the cell membrane[27], which is why people don’t hallucinate when taking antidepressants like Prozac or Zoloft. The psychedelics, on the other hand, slip through the cell’s exterior and tweak the 5-HT2A receptor, stimulating dendritic growth and increased spine formation.

Here’s where this story all comes together. In addition to being the active ingredient in ayahuasca, DMT is an endogenous molecule[28] synthesized naturally in mammalian brains. As such, human neurons are capable of producing their own “psychedelic” molecule, although likely in tiny quantities. It’s possible the brain uses its own endogenous DMT as a tool for change – as when forming dendritic spines on neurons – to encode pivotal mental states. And it’s possible psychedelic-assisted psychotherapy uses this naturally occurring neural mechanism to facilitate healing.

A word of caution

In her essay collection “These Precious Days[29],” author Ann Patchett describes taking mushrooms with a friend who was struggling with pancreatic cancer[30]. The friend had a mystical experience and came away feeling deeper connections to her family and friends. Patchett, on the other hand, said she spent eight hours “hacking up snakes in some pitch-black cauldron of lava at the center of the Earth.” It felt like death to her.

Psychedelics are powerful, and none of the classic psychedelic drugs, such as LSD, are yet approved for treatment. In 2019, the U.S. Food and Drug Administration did approve the ketamine-derived nasal spray esketamine[31], in conjunction with an antidepressant, to treat depression in adults. Psychedelic-assisted psychotherapy with MDMA (often called ecstasy or molly) for PTSD[32] and with psilocybin for depression[33] are in Phase 3 trials.

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries – most prominently China and Iran – used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert[1], I believe it’s a tool uniquely suited to internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

A conjunction of elections

Election season will soon be in full swing[2] in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June and the U.S. in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the U.K. don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan[3], Indonesia[4], India[5] and many African countries[6]. Russia cares about the U.K., Poland, Germany and the EU in general[7]. Everyone cares about the United States.

AI image, text and video generators are already beginning to inject disinformation into elections.

And that’s only considering the largest players. Every U.S. national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the cost of producing and distributing propaganda, bringing that capability within the budget of many more countries.

Election interference

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the U.S. They talked about their expectations regarding election interference in 2024. They expected the usual players – Russia, China and Iran – and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced[8] that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Russia has a long history of engaging in foreign disinformation campaigns.

Disinformation is an arms race. Both the attackers and defenders have improved, and the world of social media is also different. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets[9] as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos – ones that AI makes much easier to produce. And the current crop of generative AIs are being connected to tools[10] that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says – or amplifies – something political. These persona bots[11], as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.

Disinformation on AI steroids

That’s just one scenario. The military officers in Russia, China and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.
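As a concrete, if simplified, illustration of what cataloging and fingerprinting can involve: one recurring signal of a coordinated campaign is many accounts posting near-duplicate text. The Python sketch below flags suspiciously similar post pairs using TF-IDF cosine similarity from scikit-learn; the sample posts and the 0.5 threshold are assumptions for demonstration, not a deployed detection system.

    # Flag near-duplicate post pairs, one telltale sign of coordinated campaigns.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    posts = [
        "Candidate X secretly plans to raise your taxes next year.",
        "Candidate X is secretly planning to raise your taxes next year!",
        "Local bakery wins regional award for its sourdough bread.",
        "Sources say Candidate X secretly plans to raise your taxes.",
    ]

    vectors = TfidfVectorizer().fit_transform(posts)
    similarity = cosine_similarity(vectors)

    THRESHOLD = 0.5  # assumed cutoff; real systems tune this against known campaigns
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if similarity[i, j] > THRESHOLD:
                print(f"posts {i} and {j} look coordinated (similarity {similarity[i, j]:.2f})")

Real influence operations are far harder to spot – generative AI can paraphrase endlessly – which is exactly why the cataloging has to start before the techniques mature.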

Even before the rise of generative AI, Russian disinformation campaigns have made sophisticated use of social media.

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the U.S. needs to have efforts in place to fingerprint and identify AI-produced propaganda in places like Taiwan[12], where a presidential candidate claims a deepfake audio recording has defamed him. Otherwise, we’re not going to see these campaigns when they arrive here. Unfortunately, researchers are instead being targeted[13] and harassed[14].

Maybe this will all turn out OK. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador and national elections in Thailand, Turkey, Spain and Greece. But the sooner we know what to expect, the better we can deal with what comes.

Parents send their children to school to learn, and they don’t want to worry about whether the air is clean, whether there are insect problems or whether the school’s cleaning supplies could cause an asthma attack.

But a research collaborative, of which I’m a member, has found that schools might not be ready[1] to protect students from environmental contaminants.

I’m an extension specialist[2] focused on pest management. I’m working with a cross-disciplinary team to improve compliance with environmental health standards, and we’ve found that schools across the nation need updates[3] in order to meet minimum code requirements.

Everything from a school’s air and water quality to the safety of the pesticides and cleaning chemicals used there determines the safety of the learning environment. Environmental health standards can help a school community ensure each potential hazard is accounted for.

Air, water and food quality

So, what aspects of the school environment and student health need attention? For one, the air students and teachers breathe every day.

Understanding and controlling common pollutants indoors[4] can improve the indoor air quality and reduce the risk of health concerns[5]. Even small things like dust and dander, dead insects[6] and artificial scents[7] used to cover up smells like mold and mildew can trigger asthma[8] and allergies.

Improving ventilation, as well as a school’s air flow and filtration, can help protect building occupants from respiratory infections and maintain a healthy indoor environment. Ventilation systems[9] bring fresh, outdoor air into rooms, filter or disinfect the air in the room and improve how often air flows in and out of a room.
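A common yardstick for how often a room’s air is replaced is air changes per hour, or ACH. The small Python example below computes it from airflow and room size; the classroom dimensions are hypothetical, and the target of roughly 4 to 6 ACH is a commonly cited public health guideline rather than a code requirement.

    def air_changes_per_hour(airflow_cfm, room_volume_cubic_feet):
        """ACH: how many times per hour the room's full volume of air is replaced."""
        return airflow_cfm * 60.0 / room_volume_cubic_feet

    # Hypothetical classroom: 30 ft x 30 ft with a 9 ft ceiling = 8,100 cubic feet,
    # served by a ventilation system delivering 540 cubic feet per minute.
    ach = air_changes_per_hour(airflow_cfm=540, room_volume_cubic_feet=8100)
    print(f"{ach:.1f} air changes per hour")  # prints 4.0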

Upgrading ventilation in school buildings can improve air quality and reduce potential contaminants, including viral particles, in indoor spaces.

A white ceiling with a fluorescent light and a large, rectangular vent.
Proper ventilation in schools can reduce pathogen spread and common allergy triggers. Penpak Ngamsathain/Moment[10]

It may seem like maintaining proper food safety and drinking-water quality would be common practices. But many schools do have some level of lead contamination[11] in their food and water.

In 1991, the U.S. Environmental Protection Agency published a regulation, known as the lead and copper rule[12], to minimize lead and copper in drinking water. The EPA’s 2021 revised lead and copper rule[13] aims to reduce the risks of childhood lead exposure by focusing on schools and child care facilities and conducting outreach.

But in December 2022, a team of scientists published a report on lead and copper levels in drinking water[14], and they found evidence that lead is still showing up in drinking water in Massachusetts schools. No amount of lead[15] in drinking water is safe.

To combat contamination and ensure safe food and water, the Food and Drug Administration overhauled the Food Safety Modernization Act[16] in 2016. This act has transformed the nation’s food safety system by shifting the focus from responding to foodborne illnesses to preventing them. It gives local health officials more authority to oversee and enforce supply chain safety.

Per these new regulations[17], every school cafeteria must be inspected by the local registered sanitarian at least twice a year to ensure it meets minimum state and federal standards.

These inspections now include looking for entry points that might allow mice or rats to come in, finding areas with moisture buildup where flies, roaches or other insects can breed, and determining whether storage rooms are properly sanitized.

Integrated pest management

Even if a school has clean air, water and food, it still may not meet all the required health standards[18]. Many schools have insect infestations, and many combat these pest problems with harsh chemicals when there’s a simpler solution.

Integrated pest management[19] is an environmentally sensitive approach[20] to pest management. Known as IPM, it combines commonsense practices like keeping doors and windows closed and making sure no food is left in classrooms overnight with other ways to help prevent pests from coming in.

IPM programs consider the pests’ life cycles and their larger environment, as well as all the available pest control methods, to manage pest infestations economically and scientifically.

Common pests in schools[21] include ants, cockroaches and bedbugs. Ants enter looking for food, and cockroaches can travel in with backpacks or enter through small openings under doors or cracks in the seals around a window. Mice, cockroaches and ants can come into a kitchen or bathroom from plumbing pipes that aren’t properly sealed.

A cockroach standing on a white door trim facing downwards.
Cockroaches can lurk in custodial closets and near drains at schools. Narakhon Somsavangwong/iStock[22]

In the fall, cockroaches reside in custodial closets, kitchens and other areas where floor drains might be. These bugs use the sewer drains to move about, so an IPM approach might include making sure the drains have plenty of water flooding through them and clearing out organic matter that the cockroaches might feed on.

Green cleaning

School administrators also determine what products to use for pest control and cleaning. With the intent to prioritize the safety of both the people inside the building and the environment, some schools have adopted a “green cleaning[23]” approach.

Green cleaning[24] uses safer – or less harsh – chemical and pesticide products, since studies have found[25] that the repeated use of harsh chemicals indoors can lead to chronic health effects later in life for anyone directly exposed.

Products that contain ingredients like hydrogen peroxide[26], citric acid[27] and isopropyl alcohol[28] are generally safer than products that contain chlorine[29] or ammonia[30].

But the school’s job isn’t done, even after the infestation has been dealt with. Schools need a plan to manage their pollutants long term – these pollutants might be cleaning chemicals and pesticides or chemicals used in science classes. Preserving the school’s air quality requires a plan for storage and disposal of these materials. But finding the funds to correctly dispose of legacy chemicals can challenge already thin budgets.

Over the past decade, the U.S. Centers for Disease Control and Prevention has worked with a variety of groups to develop the Whole School, Whole Community, Whole Child[31] initiative. This approach pulls together professionals, community leaders, parents and others to support evidence-based policies and practices.

The initiative has also led some states to develop school health advisory councils[32] that work with state departments of education and health to assist their local school districts with managing the indoor environment and student health.

When the school building is safe, students and educators are more able to get down to the business of learning, undistracted.

I’ve been primarily an experimental chemist – the kind of person who goes into the laboratory and mixes and stirs chemicals – since the beginning of my career in 1965. But today, and for the past 15 years, I’ve been a full-time historian of chemistry[1].

Every October, when the announcements are made of that year’s Nobel laureates[2], I examine the results as a chemist. And all too often, I share the same response as many of my fellow chemists: “Who are they? And what did they do?”

One reason for that bewilderment – and disappointment – is that in many recent years, none of my “favorites” or those of my fellow chemists will travel to Stockholm. I am not suggesting that these Nobel laureates[3] are undeserving – quite the opposite. Rather, I am questioning whether some of these awards belong within the discipline of chemistry.

Consider some recent Nobel Prizes. In 2020, Emmanuelle Charpentier and Jennifer A. Doudna received the Nobel Prize “for the development of a method for genome editing[4].” In 2018, Frances H. Arnold received the Nobel Prize “for the directed evolution of enzymes[5],” which she shared with George P. Smith and Sir Gregory P. Winter “for the phage display of peptides and antibodies[6].” In 2015, Tomas Lindahl, Paul Modrich and Aziz Sancar received the Nobel Prize “for mechanistic studies of DNA repair[7].”

All of them received Nobel Prizes in chemistry – not the Nobel Prize in physiology or medicine[8], even though these achievements seem very clearly situated within the disciplines of medicine and the life sciences. There are many other similar examples.

woman and man in formal dress at awards ceremony
2018 co-laureate Frances Arnold receives her Nobel Prize in chemistry from King Carl XVI Gustaf of Sweden. Henrik Montgomery/AFP via Getty Images[9]

These recent mismatches are even clearer when you look further back in time. Consider the 1962 Nobel Prize awarded to Francis Crick, James Watson and Maurice Wilkins “for their discoveries concerning the molecular structure of nucleic acids[10] and its significance for information transfer in living material.” DNA[11], of course, is the most famous nucleic acid, and these three scientists were honored for deciphering how its atoms are bonded together and arranged in their three-dimensional double-helix shape.

While the “structure of DNA” most certainly is an achievement in chemistry, the Nobel Assembly at the Karolinska Institute in Stockholm awarded the Nobel Prize in physiology or medicine to Watson, Crick and Wilkins. Clearly, their Nobel achievements have had great consequences in the life sciences, genetics and medicine. Thus awarding them the Nobel Prize for physiology or medicine is quite appropriate.

metal model of structure of DNA molecule double helix
A model of a DNA molecule using some of Watson and Crick’s original metal plates. Science & Society Picture Library via Getty Images[12]

But note the disconnect. The Nobel Prizes in chemistry in 2020, 2018 and 2015 are more life-science- and medicine-oriented than Watson, Crick and Wilkins’ prize for the structure of DNA. Yet the former were awarded in chemistry, while the latter was in physiology or medicine.

What is going on? What does this trend reveal about the Nobel Foundation and its award strategies in response to the growth of science?

A gradual evolution in the Nobel Prizes

Several years ago, chemist-historian-applied mathematician Guillermo Restrepo[13] and I collaborated to study the relationship of scientific discipline to the Nobel Prize.

Each year, the Nobel Committee for chemistry studies the nominations[14] and proposes the recipients[15] of the Nobel Prize in chemistry to its parent organization, the Royal Swedish Academy of Sciences, which ultimately selects the Nobel laureates in chemistry (and physics).

We found a strong correlation between the disciplines of the members of the committee and the disciplines of the awardees themselves. Over the lifetime of the Nobel Prizes, there has been a continuous increase – from about 10% in the 1910s to 50% into the 2000s – in the percentage of committee members whose research is best identified within the life sciences.

Restrepo and I concluded[16]: As go the expertise, interests and the disciplines of the committee members, so go the disciplines honored by the Nobel Prizes in chemistry. We also concluded that the academy has intentionally included more and more life scientists on their selection committee for chemistry.

Now some perceptive readers might ask, “Is not the discipline of biochemistry just a subdiscipline of chemistry?” The underlying question is, “How does one define the disciplines in science?”

Restrepo and I reasoned that what we term “intellectual territory” defines the boundaries of a discipline. Intellectual territory can be assessed by bibliographic analysis of the scientific literature. We examined the references, often called citations, that are found in scientific publications. These references are where authors of journal articles cite the related research that’s previously been published – often the research they have relied and built on. We chose to study two journals[17]: a chemistry journal named Angewandte Chemie and a life science journal named, rather aptly, Biochemistry.

We found that the articles in Angewandte Chemie mostly cite articles published in other chemistry journals, and the articles in Biochemistry mostly cite articles in biochemistry and life sciences journals. We also found that the reverse is true: Scientific publications that cite Angewandte Chemie articles are mostly in chemistry journals, and publications that cite Biochemistry articles are mostly in biochemistry and life science journals. In other words, chemistry and the life sciences/biochemistry reside in vastly different intellectual territories that don’t tend to overlap much.
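For readers curious what this kind of bibliographic analysis looks like mechanically, here is a minimal Python sketch: given counts of which journals cite which, it computes the share of each journal’s references that stay inside its own discipline. The journal-to-discipline mapping and all the counts are invented for illustration; they are not the data from our study.

    from collections import defaultdict

    discipline = {
        "Angewandte Chemie": "chemistry",
        "JACS": "chemistry",
        "Biochemistry": "life sciences",
        "Cell": "life sciences",
    }

    # (citing journal, cited journal) -> number of references (invented counts)
    citations = {
        ("Angewandte Chemie", "JACS"): 900,
        ("Angewandte Chemie", "Biochemistry"): 100,
        ("Biochemistry", "Cell"): 800,
        ("Biochemistry", "JACS"): 200,
    }

    within = defaultdict(int)
    total = defaultdict(int)
    for (citing, cited), count in citations.items():
        total[citing] += count
        if discipline[citing] == discipline[cited]:
            within[citing] += count

    for journal in sorted(total):
        print(f"{journal}: {within[journal] / total[journal]:.0%} of references stay in-discipline")

With numbers anything like these, the two journals sit in largely separate citation worlds, which is the pattern we found.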

Not letting labels be limiting

But now, perhaps a shocker. Many scientists don’t really care how they are classified by others. Scientists care about science.

As I’ve heard Dudley Herschbach, recipient of the 1986 Nobel Prize in chemistry[18], respond to the oft-asked question of whether he’s an experimental chemist or a theoretical chemist: “The molecules don’t know, nor do they care, do they?”

But scientists, like all human beings, do care about recognition and awards. And so, chemists do mind that the Nobel Prize in chemistry has morphed into the Nobel Prize in chemistry and the life sciences.

black and white head shot of man in early 20th C attire
Jacobus Henricus van ’t Hoff received the first Nobel Prize in chemistry for ‘discovery of the laws of chemical dynamics and osmotic pressure in solutions.’ Universal History Archive/Universal Images Group via Getty Images[19]

Since the Nobel Prizes were first awarded in 1901, the community of scientists and the number of scientific disciplines have grown tremendously. Even today, new disciplines are being created. New journals are appearing. Science is becoming more multidisciplinary and interdisciplinary. Even chemistry as a discipline has grown dramatically, pushing outward its own scholarly boundaries, and chemistry’s achievements continue to be astounding.

The Nobel Prize hasn’t evolved sufficiently with the times[20]. And there just are not enough Nobel Prizes to go around for all the deserving.

I can imagine an additional Nobel Prize for the life sciences. The number of awardees could expand from the current three-per-prize maximum to whatever fits the accomplishment. Nobel Prizes could be awarded posthumously[21] to make up for past serious omissions, an option that was used by the Nobel Foundation for several years and then discontinued.

In truth, the Nobel Foundation has evolved the prizes, but very deliberately and without the major transformations that I think will certainly be required in the future. It will, I believe, eventually break free, figuratively and literally, from the mire of Alfred Nobel’s will and more than a century of distinguished tradition.

When Nobel designed the prizes[22] named after him in the late 1800s and early 1900s, he couldn’t have known that his gift would become a perpetual endowment and have such lasting – indeed, even increasing – significance. Nobel also could not have anticipated the growth of science, nor the fact that over time, some disciplines would fade in importance and new disciplines would evolve.

So far, the extremely competent and highly dedicated scholars at the Nobel Foundation and their partner organizations – and I acknowledge with real appreciation their selfless devotion to the cause – haven’t responded adequately to the growth of the sciences or to the inequities and even incompleteness of past award years. But I have confidence: In time, they will do so.

Read more

Ledura Watkins[1] was 19 years old when he was accused of murdering a public school teacher. At trial, a forensic expert testified that a single hair found at the scene was similar to Watkins’ and stated his conclusion was based on “reasonable scientific certainty.” He explained that he’d conducted thousands of hair analyses and “had never been wrong.”

This one hair was the only physical evidence tying Watkins to the crime. In 1976, Ledura Watkins was convicted of first-degree murder and sentenced to life in prison without the possibility of parole.

Here’s the catch: The expert’s testimony was inappropriate and misleading, and the jury made a mistake. Watkins was innocent. Ledura Watkins lost over 41 years of his life to a wrongful conviction based on improper forensic testimony[2].

Our interdisciplinary[3] team of[4] legal psychologists[5], forensic experts[6] and an attorney[7] worked to develop an educational tool to help jurors avoid making similar mistakes in the future.

Forensic testimony carries weight with jurors

One out of every five wrongful convictions[8] cataloged through September 2023 by the National Registry of Exonerations[9] involved improper forensic evidence.

There is reason to be concerned about jurors’ ability to adequately evaluate forensic evidence. Jurors tend[10] to rely heavily[11] on forensic evidence[12] when making decisions[13] in a case, despite struggling to[14] understand the statistical analyses[15] and language used[16] to explain forensic science. They might ignore the differences between appropriately worded forensic testimony and testimony that violates best-practice guidelines[17], fail to grasp the limitations of forensic science in expert witness testimony and overly rely[18] on an expert’s experience[19] when evaluating the evidence.

Despite all these issues, jurors remain overconfident in their ability[20] to comprehend forensic testimony.

Researchers have long suggested that part of the problem is the way forensic evidence is presented[21] in courtrooms. In response to calls by scientists[22], the U.S. Department of Justice approved the Uniform Language for Testimony and Reports[23] in 2018. These guidelines[24] aimed to lessen misleading statements in forensic testimony and outlined five statements forensic experts should not make. The expert in Ledura Watkins’ case made several of these statements, including claiming that his examination was perfect because of the number of examinations he had conducted.

It’s understandable that jurors are swayed by an expert who uses terms like “error free,” “perfect” or “scientific certainty.” We are interested in finding ways to help people critically evaluate the forensic testimony they hear in court.

An informational video for jurors

Inspired by one court’s use of videos to help train jurors[25] on relevant concepts, our team developed what we call the forensic science informational video. It’s about 4½ minutes long and focuses on latent print examinations, including fingerprints, footwear impressions and tire impressions.

In the FSI video, a narrator explains what a forensic expert is and how they might testify in court. The video describes how latent print examinations are conducted and what types of statements are appropriate – or not – for an expert to make in their testimony, based on the DOJ guidelines.

Mock jurors watched this training video about forensic testimony.

In two different studies, we recruited jury-eligible adults to test whether our video had any effect on how jurors judged forensic testimony.

In our first study, some participants watched the FSI video and others didn’t. Participants who watched the FSI video were more likely to give lower ratings to improper forensic testimony[26] and the forensic expert who gave it.

In our second study, we tested whether the video could help jurors differentiate between low-quality and high-quality testimony[27] without creating a general distrust in forensic evidence. Participants watched a 45-minute mock trial video. Without training from the FSI video, participants rated both low- and high-quality forensic testimony highly. That is, they didn’t differentiate between testimony in which the expert violated three of the DOJ guidelines and testimony that followed the guidelines.

But participants who watched our informational video prior to the mock trial were more likely to differentiate between the low- and high-quality testimony, rating the expert giving low-quality testimony more poorly than the expert giving high-quality testimony.

sign directing juror where to report for their service
In-court instruction can provide everyday citizens with the knowledge they need to make good decisions. Chip Somodevilla via Getty Images[28]

Training helps jurors assess forensic testimony

These findings suggest that our informational video helped mock jurors in two ways. Participants learned how to identify low-quality forensic testimony and how to adjust their evaluations of the expert and their testimony accordingly. Importantly, the video did not cause participants to distrust latent print evidence in general.

Our study is a promising first step in exploring ways to help jurors understand complex forensic testimony. A brief video like ours can provide standardized information about forensic experts and types of appropriate and inappropriate testimony to jurors across courts, much like similar videos about implicit bias[29] already being used in some courts.

We believe a training video has the potential to be easily implemented as an educational tool to improve the quality of jurors’ decision-making. A better understanding of the distinction between proper and improper testimony would improve the justice system by helping jurors fulfill their roles as objective fact-finders – and hopefully prevent wrongful convictions like that of Ledura Watkins.
