
Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries – most prominently China and Iran – used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert[1], I believe it’s a tool uniquely suited to internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

A conjunction of elections

Election season will soon be in full swing[2] in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June and the U.S. in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the U.K. don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan[3], Indonesia[4], India[5] and many African countries[6]. Russia cares about the U.K., Poland, Germany and the EU in general[7]. Everyone cares about the United States.

AI image, text and video generators are already beginning to inject disinformation into elections.

And that’s only considering the largest players. Every U.S. national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.

Election interference

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the U.S. They talked about their expectations regarding election interference in 2024. They expected the usual players – Russia, China and Iran – and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced[8] that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Russia has a long history of engaging in foreign disinformation campaigns.

Disinformation is an arms race. Both attackers and defenders have improved, and the world of social media itself has changed. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets[9] as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos – ones that AI makes much easier to produce. And the current crop of generative AIs is being connected to tools[10] that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says – or amplifies – something political. These persona bots[11], as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.
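The scale effect behind persona bots comes down to simple arithmetic. Here is a back-of-envelope sketch in Python; every number in it is an illustrative assumption, not a figure from research:

```python
# Back-of-envelope arithmetic for persona bots: each account is almost
# entirely benign, but a large fleet still produces political content at scale.
# All numbers below are illustrative assumptions, not measured figures.

def expected_political_posts(num_bots: int, posts_per_day: float,
                             political_fraction: float) -> float:
    """Expected political posts per day across a fleet of persona bots."""
    return num_bots * posts_per_day * political_fraction

# One bot posting 3 times a day, political only 1% of the time:
single = expected_political_posts(1, 3, 0.01)         # ~0.03 posts/day: negligible
# A million such bots produce tens of thousands of political posts daily.
fleet = expected_political_posts(1_000_000, 3, 0.01)  # ~30,000 posts/day
print(single, fleet)
```

An individual account that is political 1% of the time looks like noise; a coordinated million of them is a megaphone.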

Disinformation on AI steroids

That’s just one scenario. The military officers in Russia, China and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.

Even before the rise of generative AI, Russian disinformation campaigns have made sophisticated use of social media.

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the U.S. needs to have efforts in place to fingerprint and identify AI-produced propaganda in places like Taiwan[12], where a presidential candidate claims a deepfake audio recording has defamed him. Otherwise, we’re not going to see these campaigns when they arrive here. Unfortunately, researchers are instead being targeted[13] and harassed[14].

Maybe this will all turn out OK. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador and national elections in Thailand, Turkey, Spain and Greece. But the sooner we know what to expect, the better we can deal with what comes.


Parents send their children to school to learn, and they don’t want to worry about whether the air is clean, whether there are insect problems or whether the school’s cleaning supplies could cause an asthma attack.

But a research collaborative, of which I’m a member, has found that schools might not be ready[1] to protect students from environmental contaminants.

I’m an extension specialist[2] focused on pest management. I’m working with a cross-disciplinary team to improve compliance with environmental health standards, and we’ve found that schools across the nation need updates[3] in order to meet minimum code requirements.

Everything from a school’s air and water quality to the safety of the pesticides and cleaning chemicals used there determines the safety of the learning environment. Environmental health standards can help a school community ensure each potential hazard is accounted for.

Air, water and food quality

So, what aspects of the school environment and student health need attention? For one, the air students and teachers breathe every day.

Understanding and controlling common pollutants indoors[4] can improve the indoor air quality and reduce the risk of health concerns[5]. Even small things like dust and dander, dead insects[6] and artificial scents[7] used to cover up smells like mold and mildew can trigger asthma[8] and allergies.

Improving ventilation, as well as a school’s air flow and filtration, can help protect building occupants from respiratory infections and maintain a healthy indoor environment. Ventilation systems[9] bring fresh, outdoor air into rooms, filter or disinfect the air in the room and improve how often air flows in and out of a room.
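Ventilation adequacy is commonly expressed as air changes per hour (ACH) – how many times a room’s full volume of air is replaced in an hour. A minimal sketch of the standard calculation, using hypothetical classroom numbers:

```python
# Air changes per hour (ACH): how many times a room's full volume of air is
# replaced in one hour. Standard formula:
#   ACH = airflow (cubic feet per minute) * 60 minutes / room volume (cubic feet)
# The example numbers are hypothetical, for illustration only.

def air_changes_per_hour(airflow_cfm: float, room_volume_ft3: float) -> float:
    """Complete air exchanges per hour for a given room and airflow."""
    return airflow_cfm * 60 / room_volume_ft3

# A 30 ft x 30 ft classroom with a 10 ft ceiling holds 9,000 cubic feet of air.
classroom_volume = 30 * 30 * 10
ach = air_changes_per_hour(airflow_cfm=750, room_volume_ft3=classroom_volume)
print(ach)  # 5.0 full air exchanges per hour
```

The same formula works in reverse: given a target ACH, a facilities manager can solve for the airflow a room’s ventilation system needs to deliver.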

Upgrading ventilation in school buildings can improve air quality and reduce potential contaminants, including viral particles, in indoor spaces.

Proper ventilation in schools can reduce pathogen spread and common allergy triggers. Penpak Ngamsathain/Moment[10]

It may seem like maintaining proper food safety and drinking-water quality would be common practices. But many schools do have some level of lead contamination[11] in their food and water.

In 1991, the U.S. Environmental Protection Agency published a regulation, known as the lead and copper rule[12], to minimize lead and copper in drinking water. The EPA’s 2021 revised lead and copper rule[13] aims to reduce the risks of childhood lead exposure by focusing on schools and child care facilities and conducting outreach.

But in December 2022, a team of scientists published a report on lead and copper levels in drinking water[14], and they found evidence that lead is still showing up in drinking water in Massachusetts schools. No amount of lead[15] is safe to have in the water.
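To make the monitoring concrete, here is a minimal sketch of screening water samples against the EPA’s 15-parts-per-billion lead action level; the readings are invented, and the action level is a regulatory remediation trigger, not a health-based “safe” threshold:

```python
# Screening hypothetical school water samples against the EPA lead action
# level (15 parts per billion under the Lead and Copper Rule). Exceeding the
# action level triggers remediation; it does not mark lower readings as safe,
# since no amount of lead is safe. Sample readings below are invented.

LEAD_ACTION_LEVEL_PPB = 15.0

def taps_needing_action(readings_ppb: dict) -> list:
    """Return sampling locations at or above the action level."""
    return [tap for tap, ppb in readings_ppb.items()
            if ppb >= LEAD_ACTION_LEVEL_PPB]

samples = {"cafeteria tap": 3.2, "gym fountain": 18.5, "room 104 sink": 15.0}
print(taps_needing_action(samples))  # ['gym fountain', 'room 104 sink']
```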

To combat contamination and ensure safe food and water, the Food and Drug Administration overhauled the Food Safety Modernization Act[16] in 2016. This act has transformed the nation’s food safety system by shifting the focus from responding to foodborne illnesses to preventing them. It gives local health officials more authority to oversee and enforce supply chain safety.

Under these new regulations[17], every school cafeteria must be inspected by a local registered sanitarian at least twice a year to ensure it meets minimum state and federal standards.

These inspections now include looking for entry points that might allow mice or rats to come in, finding areas with moisture buildup where flies, roaches or other insects can breed, and determining whether storage rooms are properly sanitized.

Integrated pest management

Even if a school has clean air, water and food, it still may not meet all the required health standards[18]. Many schools have insect infestations, and many combat these pest problems with harsh chemicals when there’s a simpler solution.

Integrated pest management[19] is an environmentally sensitive approach[20] to pest management. Known as IPM, it combines commonsense practices like keeping doors and windows closed and making sure no food is left in classrooms overnight with other ways to help prevent pests from coming in.

IPM programs consider the pests’ life cycles and their larger environment, as well as all the available pest control methods, to manage pest infestations economically and scientifically.

Common pests in schools[21] include ants, cockroaches and bedbugs. Ants enter looking for food, and cockroaches can travel in with backpacks or enter through small openings under doors or cracks in the seals around a window. Mice, cockroaches and ants can come into a kitchen or bathroom from plumbing pipes that aren’t properly sealed.

Cockroaches can lurk in custodial closets and near drains at schools. Narakhon Somsavangwong/iStock[22]

In the fall, cockroaches reside in custodial closets, kitchens and other areas where floor drains might be. These bugs use the sewer drains to move about, so an IPM approach might include making sure the drains have plenty of water flooding through them and clearing out organic matter that the cockroaches might feed on.

Green cleaning

School administrators also determine what products to use for pest control and cleaning. With the intent to prioritize the safety of both the people inside the building and the environment, some schools have adopted a “green cleaning[23]” approach.

Green cleaning[24] uses safer – or less harsh – chemical and pesticide products, since studies have found[25] that the repeated use of harsh chemicals indoors can lead to chronic health effects later in life for anyone directly exposed.

Products that contain ingredients like hydrogen peroxide[26], citric acid[27] and isopropyl alcohol[28] are generally safer than products that contain chlorine[29] or ammonia[30].

But the school’s job isn’t done, even after the infestation has been dealt with. Schools need a plan to manage their pollutants long term – these pollutants might be cleaning chemicals and pesticides or chemicals used in science classes. Preserving the school’s air quality requires a plan for storage and disposal of these materials. But finding the funds to correctly dispose of legacy chemicals can challenge already thin budgets.

Over the past decade, the U.S. Centers for Disease Control and Prevention has worked with a variety of groups to develop the Whole School, Whole Community, Whole Child[31] initiative. This approach pulls together professionals, community leaders, parents and others to support evidence-based policies and practices.

The initiative has also led some states to develop school health advisory councils[32] that work with state departments of education and health to assist their local school districts with managing the indoor environment and student health.

When the school building is safe, students and educators are more able to get down to the business of learning, undistracted.


I’ve been primarily an experimental chemist – the kind of person who goes into the laboratory and mixes and stirs chemicals – since the beginning of my career in 1965. Today, and for the past 15 years, I’m a full-time historian of chemistry[1].

Every October, when the announcements are made of that year’s Nobel laureates[2], I examine the results as a chemist. And all too often, I share the same response as many of my fellow chemists: “Who are they? And what did they do?”

One reason for that bewilderment – and disappointment – is that in many recent years, none of my “favorites” or those of my fellow chemists will travel to Stockholm. I am not suggesting that these Nobel laureates[3] are undeserving – quite the opposite. Rather, I am questioning whether some of these awards belong within the discipline of chemistry.

Consider some recent Nobel Prizes. In 2020, Emmanuelle Charpentier and Jennifer A. Doudna received the Nobel Prize “for the development of a method for genome editing[4].” In 2018, Frances H. Arnold received the Nobel Prize “for the directed evolution of enzymes[5],” which she shared with George P. Smith and Sir Gregory P. Winter “for the phage display of peptides and antibodies[6].” In 2015, Tomas Lindahl, Paul Modrich and Aziz Sancar received the Nobel Prize “for mechanistic studies of DNA repair[7].”

All of them received Nobel Prizes in chemistry – not the Nobel Prize in physiology or medicine[8], even though these achievements seem very clearly situated within the disciplines of medicine and the life sciences. There are many other similar examples.

2018 co-laureate Frances Arnold receives her Nobel Prize in chemistry from King Carl XVI Gustaf of Sweden. Henrik Montgomery/AFP via Getty Images[9]

These recent mismatches are even clearer when you look further back in time. Consider the 1962 Nobel Prize awarded to Francis Crick, James Watson and Maurice Wilkins “for their discoveries concerning the molecular structure of nucleic acids[10] and its significance for information transfer in living material.” DNA[11], of course, is the most famous nucleic acid, and these three scientists were honored for deciphering how its atoms are bonded together and arranged in their three-dimensional double-helix shape.

While the “structure of DNA” most certainly is an achievement in chemistry, the Nobel Assembly at the Karolinska Institute in Stockholm awarded the Nobel Prize in physiology or medicine to Watson, Crick and Wilkins. Clearly, their Nobel achievements have had great consequences in the life sciences, genetics and medicine. Thus awarding them the Nobel Prize for physiology or medicine is quite appropriate.

A model of a DNA molecule using some of Watson and Crick’s original metal plates. Science & Society Picture Library via Getty Images[12]

But note the disconnect. The Nobel Prizes in chemistry in 2020, 2018 and 2015 are more life-science- and medicine-oriented than Watson, Crick and Wilkins’ prize for the structure of DNA. Yet the former were awarded in chemistry, while the latter was in physiology or medicine.

What is going on? What does this trend reveal about the Nobel Foundation and its award strategies in response to the growth of science?

A gradual evolution in the Nobel Prizes

Several years ago, chemist-historian-applied mathematician Guillermo Restrepo[13] and I collaborated to study the relationship of scientific discipline to the Nobel Prize.

Each year, the Nobel Committee for chemistry studies the nominations[14] and proposes the recipients[15] of the Nobel Prize in chemistry to its parent organization, the Royal Swedish Academy of Sciences, which ultimately selects the Nobel laureates in chemistry (and physics).

We found a strong correlation between the disciplines of the members of the committee and the disciplines of the awardees themselves. Over the lifetime of the Nobel Prizes, there has been a continuous increase – from about 10% in the 1910s to 50% in the 2000s – in the percentage of committee members whose research is best identified within the life sciences.

Restrepo and I concluded[16]: As go the expertise, interests and the disciplines of the committee members, so go the disciplines honored by the Nobel Prizes in chemistry. We also concluded that the academy has intentionally included more and more life scientists on their selection committee for chemistry.

Now some perceptive readers might ask, “Is not the discipline of biochemistry just a subdiscipline of chemistry?” The underlying question is, “How does one define the disciplines in science?”

Restrepo and I reasoned that what we term “intellectual territory” defines the boundaries of a discipline. Intellectual territory can be assessed by bibliographic analysis of the scientific literature. We examined the references, often called citations, that are found in scientific publications. These references are where authors of journal articles cite the related research that’s previously been published – often the research they have relied and built on. We chose to study two journals[17]: a chemistry journal named Angewandte Chemie and a life science journal named, rather aptly, Biochemistry.

We found that the articles in Angewandte Chemie mostly cite articles published in other chemistry journals, and the articles in Biochemistry mostly cite articles in biochemistry and life sciences journals. We also found that the reverse is true: Scientific publications that cite Angewandte Chemie articles are mostly in chemistry journals, and publications that cite Biochemistry articles are mostly in biochemistry and life science journals. In other words, chemistry and the life sciences/biochemistry reside in vastly different intellectual territories that don’t tend to overlap much.
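The citation analysis described above can be sketched in a few lines of Python. The reference counts here are invented to illustrate the method, not data from the actual study:

```python
# Toy sketch of a citation-profile analysis: for each journal, count what
# fraction of its outgoing references land in each discipline. The reference
# lists below are invented for illustration; the real study examined
# Angewandte Chemie and Biochemistry.

from collections import Counter

def citation_profile(cited_disciplines: list) -> dict:
    """Fraction of a journal's references that fall in each discipline."""
    counts = Counter(cited_disciplines)
    total = sum(counts.values())
    return {field: count / total for field, count in counts.items()}

# Hypothetical reference lists (each entry = discipline of one cited article).
angewandte_refs = ["chemistry"] * 80 + ["life sciences"] * 20
biochemistry_refs = ["life sciences"] * 85 + ["chemistry"] * 15

print(citation_profile(angewandte_refs))    # {'chemistry': 0.8, 'life sciences': 0.2}
print(citation_profile(biochemistry_refs))  # {'life sciences': 0.85, 'chemistry': 0.15}
```

Two journals whose profiles concentrate in different disciplines, in both the citing and cited directions, occupy largely separate intellectual territories.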

Not letting labels be limiting

But now, perhaps a shocker. Many scientists don’t really care how they are classified by others. Scientists care about science.

As I’ve heard Dudley Herschbach, recipient of the 1986 Nobel Prize in chemistry[18], respond to the oft-asked question of whether he’s an experimental chemist or a theoretical chemist: “The molecules don’t know, nor do they care, do they?”

But scientists, like all human beings, do care about recognition and awards. And so, chemists do mind that the Nobel Prize in chemistry has morphed into the Nobel Prize in chemistry and the life sciences.

Jacobus Henricus van ’t Hoff received the first Nobel Prize in chemistry for ‘discovery of the laws of chemical dynamics and osmotic pressure in solutions.’ Universal History Archive/Universal Images Group via Getty Images[19]

Since the Nobel Prizes were first awarded in 1901, the community of scientists and the number of scientific disciplines have grown tremendously. Even today, new disciplines are being created. New journals are appearing. Science is becoming more multidisciplinary and interdisciplinary. Even chemistry as a discipline has grown dramatically, pushing outward its own scholarly boundaries, and chemistry’s achievements continue to be astounding.

The Nobel Prize hasn’t evolved sufficiently with the times[20]. And there just are not enough Nobel Prizes to go around to all the deserving.

I can imagine an additional Nobel Prize for the life sciences. The number of awardees could expand from the current three-per-prize maximum to whatever fits the accomplishment. Nobel Prizes could be awarded posthumously[21] to make up for past serious omissions, an option that was used by the Nobel Foundation for several years and then discontinued.

In truth, the Nobel Foundation has evolved the prizes, but very deliberately and without the major transformations that I think will certainly be required in the future. It will, I believe, eventually break free, figuratively and literally, from the mire of Alfred Nobel’s will and more than a century of distinguished tradition.

When Nobel designed the prizes[22] named after him in the late 1800s and early 1900s, he couldn’t have known that his gift would become a perpetual endowment and have such lasting – indeed, even increasing – significance. Nobel also could not have anticipated the growth of science, nor the fact that over time, some disciplines would fade in importance and new disciplines would evolve.

So far, the extremely competent and highly dedicated scholars at the Nobel Foundation and their partner organizations – and I acknowledge with real appreciation their selfless devotion to the cause – haven’t responded adequately to the growth of the sciences or to the inequities and even incompleteness of past award years. But I have confidence: In time, they will do so.


Ledura Watkins[1] was 19 years old when he was accused of murdering a public school teacher. At trial, a forensic expert testified that a single hair found at the scene was similar to Watkins’ and stated his conclusion was based on “reasonable scientific certainty.” He explained that he’d conducted thousands of hair analyses and “had never been wrong.”

This one hair was the only physical evidence tying Watkins to the crime. In 1976, Ledura Watkins was convicted of first-degree murder and sentenced to life in prison without the possibility of parole.

Here’s the catch: The expert’s testimony was inappropriate and misleading, and the jury made a mistake. Watkins was innocent. Ledura Watkins lost over 41 years of his life to a wrongful conviction based on improper forensic testimony[2].

Our interdisciplinary[3] team of[4] legal psychologists[5], forensic experts[6] and an attorney[7] worked to develop an educational tool to help jurors avoid making similar mistakes in the future.

Forensic testimony carries weight with jurors

One out of every five wrongful convictions[8] cataloged through September 2023 by the National Registry of Exonerations[9] involved improper forensic evidence.

There is reason to be concerned about jurors’ ability to adequately evaluate forensic evidence. Jurors tend[10] to rely heavily[11] on forensic evidence[12] when making decisions[13] in a case, despite struggling to[14] understand the statistical analyses[15] and language used[16] to explain forensic science. They might ignore the differences between appropriately worded forensic testimony and testimony that violates best-practice guidelines[17], fail to grasp the limitations of forensic science in expert witness testimony and overly rely[18] on an expert’s experience[19] when evaluating the evidence.

Despite all these issues, jurors remain overconfident in their ability[20] to comprehend forensic testimony.

Researchers have long suggested that part of the problem is the way forensic evidence is presented[21] in courtrooms. In response to calls by scientists[22], the U.S. Department of Justice approved the Uniform Language for Testimony and Reports[23] in 2018. These guidelines[24] aimed to lessen misleading statements in forensic testimony and outlined five statements forensic experts should not make. The expert in Ledura Watkins’ case made several of these statements, including claiming that his examination was perfect because of the number of examinations he had conducted.

It’s understandable that jurors are swayed by an expert who uses terms like “error free,” “perfect” or “scientific certainty.” We are interested in finding ways to help people critically evaluate the forensic testimony they hear in court.

An informational video for jurors

Inspired by one court’s use of videos to help train jurors[25] on relevant concepts, our team developed what we call the forensic science informational video. It’s about 4½ minutes long and focuses on latent print examinations, including fingerprints, footwear impressions and tire impressions.

In the FSI video, a narrator explains what a forensic expert is and how they might testify in court. The video describes how latent print examinations are conducted and what types of statements are appropriate – or not – for an expert to make in their testimony, based on the DOJ guidelines.

Mock jurors watched this training video about forensic testimony.

In two different studies, we recruited jury-eligible adults to test whether our video had any effect on how jurors judged forensic testimony.

In our first study, some participants watched the FSI video and others didn’t. Participants who watched the FSI video were more likely to give lower ratings to improper forensic testimony[26] and the forensic expert who gave it.

In our second study, we tested whether the video could help jurors differentiate between low-quality and high-quality testimony[27] without creating a general distrust in forensic evidence. Participants watched a 45-minute mock trial video. Without training from the FSI video, participants rated both low- and high-quality forensic testimony highly. That is, they didn’t differentiate between testimony in which the expert violated three of the DOJ guidelines and testimony that followed the guidelines.

But participants who watched our informational video prior to the mock trial were more likely to differentiate between the low- and high-quality testimony, rating the expert giving low-quality testimony more poorly than the expert giving high-quality testimony.
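The pattern in these results can be summarized as a simple comparison of mean ratings by condition. The numbers below are invented to mirror the reported pattern, not the study’s actual data:

```python
# Comparing mean expert-credibility ratings across study conditions.
# The ratings below are invented to mirror the reported pattern
# (e.g., on a 1-7 scale); they are not the study's data.

from statistics import mean

def condition_means(ratings: dict) -> dict:
    """Average rating for each testimony condition."""
    return {cond: mean(vals) for cond, vals in ratings.items()}

mock_ratings = {
    "no video / low-quality testimony":  [6, 6, 5, 6, 6],  # rated highly anyway
    "no video / high-quality testimony": [6, 6, 6, 5, 6],
    "video / low-quality testimony":     [3, 4, 3, 3, 4],  # now rated lower
    "video / high-quality testimony":    [6, 5, 6, 6, 6],
}
print(condition_means(mock_ratings))
```

Untrained jurors rate both kinds of testimony about equally; the training shows up as a gap between the low- and high-quality conditions.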

In-court instruction can provide everyday citizens with the knowledge they need to make good decisions. Chip Somodevilla via Getty Images[28]

Training helps jurors assess forensic testimony

These findings suggest that our informational video helped mock jurors in two ways. Participants learned how to identify low-quality forensic testimony and how to adjust their evaluations of the expert and their testimony accordingly. Importantly, the video did not cause participants to distrust latent print evidence in general.

Our study is a promising first step in exploring ways to help jurors understand complex forensic testimony. A brief video like ours can provide standardized information about forensic experts and types of appropriate and inappropriate testimony to jurors across courts, much like similar videos about implicit bias[29] already being used in some courts.

We believe a training video has the potential to be easily implemented as an educational tool to improve the quality of jurors’ decision-making. A better understanding of the distinction between proper and improper testimony would improve the justice system by helping jurors fulfill their roles as objective fact-finders – and hopefully prevent wrongful convictions like that of Ledura Watkins.


When you stroll along a beach, you may look down and spot colorful bits of worn glass mixed in with the sand. But the little treasures you’ve found actually began as discarded trash.

As an environmental science professor[1], I find these gifts from the sea particularly interesting. I have analyzed sand from across the world and added samples, including one of sea glass, into a collection for the environmental, earth and atmospheric sciences at UMass Lowell. The way this trash-turned-treasure washes up on beaches reflects an intersection between human activity and Earth’s natural processes.

A history of glass

Prior to the proliferation of single-use plastics starting in the early 1970s, glass was the container of choice. People in ancient Egypt, Greece and Rome[2] used glass for windows, bottles, plates, bowls and more.

In the mid-20th century, people across the United States had milk bottles delivered to their homes[3], and soda came in glass bottles[4]. After these glass containers served their purpose, users would toss them into a dump.

Before the environmental movements of the 1960s[5], trash dumps in the United States were often left open and exposed to rain and wind. As many of these trash heaps[6] sat near waterways or coves, runoff would wash the trash – including discarded glass bottles – into the ocean.

On their way to the ocean, glass bottles would run into rocks and other objects, which would break the glass into smaller pieces. When these fractured bits traveled close enough to the coast, high tides and incoming waves would wash them out to sea.

Wave action[7] causes these fragments to slide and roll along the sandy seafloor. It’s this movement that rounds the glass’ sharp edges and gives the once smooth and clear glass its pitted, frosted appearance.

With the shift to single-use plastic, beaches have more plastic waste and less sea glass. AP Photo/Julio Cortez[8]

Sand to glass, then back to the sand

All glass, including sea glass, begins as sand[9], specifically quartz sand[10]. Quartz sand is clear or white – you can see it on many beaches along Florida’s Gulf Coast[11].

To make glass from sand, refiners first purify their quartz sand[12] using both physical and chemical processes to remove all minerals but quartz. They then melt the remaining quartz sand, add a bit of soda ash and limestone to increase the malleability and strength of the glass, and reform it into bottles, bowls, windows and more.

Because quartz is the foundation of all glass, many of the mineral’s characteristics[13] are reflected in sea glass. The most obvious is its clarity – quartz is nearly transparent – but also how quartz fractures or breaks. Quartz fractures tend to be a special type of break, called a conchoidal fracture. This type of fracture begins from a single point and breaks outwardly in a semicircular shape, so that the broken surface kind of looks like the inside of a seashell.

A zoomed-in look at sand – several small rocks of varying colors, from yellow to white to gray.
The yellowish piece of glass pictured in the center has a conchoidal fracture common for quartz. Lori Weeden

Quartz is also highly resistant to chemical weathering[14]. Because sea glass is made from quartz, it tends to break down into smaller fragments, but it won’t weather away quickly.

Most sea glass spends at least a few decades[15] on the seafloor getting tossed around and smoothing its sharp edges in the sand. Some pieces of sea glass are estimated to be hundreds of years old[16] – it’s quartz’s hardiness that allows sea glass to persist in the environment for such a long time.

A global industry

Selling and trading sea glass is a multimillion-dollar industry in the United States, supported by organizations like the North American Sea Glass Association[17] and the International Sea Glass Association[18].

Sea glass jewelry and collections populate craft shows all around the country. There are likely very few beach towns in the United States without a local sea glass jeweler selling custom designs.

With the explosion of single-use plastics[19] as an alternative to glass bottles, sea glass may soon become harder to find[20], with less glass and more plastic in the supply chain.

As sea glass becomes harder to find, some retailers are creating their own artificial sea glass using rock tumblers and chemicals. The difference between the real and artificial beach glass is subtle[21] but still recognizable. Artificial sea glass has a uniformly frosted exterior, without the pitting seen in natural sea glass.

A close up image of sand, which looks like small rocks, with a green, translucent piece of sea glass
Artificial sea glass doesn’t have the same pitted texture as real sea glass. Pictured here in green is real sea glass, with small, textured marks across its surface. Lori Weeden

The public may eventually become less interested in single-use items and turn back to glass[22]. Unlike plastic, glass can be recycled multiple times[23] without losing its integrity, and glass doesn’t have the same environmental impact as microplastics[24].

But because there aren’t many markets for recycled glass and it’s heavy and difficult to transport, it’s not always financially beneficial to recycle glass[25].

However, activists have demanded environmentally friendly alternatives to single-use plastics[26] in recent years. Aluminum bottles and cans are becoming more popular, and glass will remain an alternative to plastic. Unless it’s properly recycled, discarded glass will continue providing sea glass for the next generations to discover.


Black-and-white photo of six women wearing headphones and old-fashioned clothing, plugging cables into sockets

APIs, or application programming interfaces, are the gateways to the digital world. They link a wide array of software applications and systems. APIs facilitate communication between different software systems, and so power everything from social media – think of the share buttons on webpages – to e-commerce transactions.

At a simple level, APIs are like electrical sockets. A software application that you’re using, say the playback controls for a video on a webpage, is like an appliance. The system that provides data or services that the application needs, say YouTube, is like the electrical grid. The API, in this example the YouTube Player API[1], is like the standard electrical outlet that lets any appliance plug in to the grid.

APIs are not really so simple, though. Another analogy is a restaurant. The customer is the software application, the chef is the data or service, and the waiter is the API. The waiter brings the customer the menu, which lists available dishes – i.e., options for accessing data or service – and then brings the customer’s request to the chef.

APIs rely on defined rules and protocols that ensure accurate data exchange and effective collaboration. Different API styles and protocols suit different uses and software developers’ preferences.
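The restaurant analogy can be sketched in code. This is a toy Python sketch, not any real API – every class and method name here is invented for illustration. It shows the shape of the idea: a service (the “chef”) whose internals the caller never touches, and an API (the “waiter”) exposing a small, fixed menu of requests.

```python
# Toy sketch of the restaurant analogy. All names are hypothetical,
# invented for illustration -- this is not a real library's API.

class VideoService:
    """The 'chef': internal logic hidden from the customer."""
    def __init__(self):
        self._playing = False
        self._position = 0

    def _start_playback(self):
        self._playing = True

    def _seek_to(self, seconds):
        self._position = max(0, seconds)


class PlayerAPI:
    """The 'waiter': the documented menu of allowed operations."""
    def __init__(self, service):
        self._service = service

    def play(self):
        self._service._start_playback()
        return {"status": "playing"}

    def seek(self, seconds):
        self._service._seek_to(seconds)
        return {"status": "ok", "position": seconds}


api = PlayerAPI(VideoService())
print(api.play())      # {'status': 'playing'}
print(api.seek(42))    # {'status': 'ok', 'position': 42}
```

The customer only ever sees `play` and `seek` – the menu. How the kitchen fulfills those requests can change without breaking any caller, which is the point of a defined API contract.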

APIs power various applications and services across many diverse industries. Facebook, Instagram and Twitter – now rebranded as X – provide APIs that let users share their content across these social media platforms. By leveraging their social media credentials, users can log into websites, weather apps and games to simplify their online experiences. Amazon and PayPal depend on APIs for secure payment processing and efficient order fulfillment. Navigation services like Google Maps leverage APIs to provide real-time location data and accurate directions. Even voice-activated smart assistants like Amazon’s Alexa and Google Assistant use APIs to manage and control smart home devices.

A widely used API is critical for most mobile and web apps.

Who has access to an API also matters. For example, in March 2023, X began charging a wider range of users for access to its data API[2], which lets users collect large numbers of tweets to see what people are tweeting about. Businesses use the API for market and competitive research. But many people with limited resources, like developers of some free apps and social science researchers[3], also rely on it.

APIs are also playing a role in making artificial intelligence widely available. For example, Google[4], Microsoft[5] and OpenAI[6] provide APIs for software developers to incorporate AI in their products.

As APIs continue to shape the digital landscape, developers face challenges. Ensuring the security and privacy of data exchanged through APIs is paramount, given their integration into critical systems. As APIs evolve, managing their complex ecosystems and making sure old programs can use new APIs will be a considerable task.


Biomedical implants – such as pacemakers, breast implants and orthopedic hardware like screws and plates to replace broken bones – have improved patient outcomes across a wide range of diseases. However, many implants fail[1] because the body rejects them, and they need to be removed because they no longer function and can cause pain or discomfort.

An immune reaction called the foreign body response[2] – where the body encapsulates the implant in sometimes painful scar tissue – is a key driver of implant rejection. Developing treatments that target the mechanisms driving foreign body responses could improve the design and safety of biomedical implants.

I am a biomedical engineer[3] who studies why the body forms scar tissue around medical devices. My colleagues Dharshan Sivaraj[4], Jagan Padmanabhan[5], Geoffrey Gurtner[6] and I wanted to learn more about what causes foreign body responses. In our research, recently published in the journal Nature Biomedical Engineering, we identified a gene[7] that appears to drive this reaction because of the increased stress implants put on the tissues surrounding them.

Many implants need to be replaced because the immune system damages them over time.

Mechanics of implant rejection

Researchers hypothesize that foreign body responses are triggered by the chemical and material composition of the implant. Just as a person can tell the difference between touching something soft like a pillow versus something hard like a table, cells can tell when there are changes to the softness or stiffness of the tissues surrounding them as a result of an implant.

The increased mechanical stress[8] on those cells sends a signal to the immune system that there is a foreign body present. Immune cells activated by mechanical pressure respond by building a capsule made of scar tissue around the implant in an attempt to wall it off. The more severe the immune reaction, the thicker the capsule. This same process protects the body from infection after injuries like a splinter in your finger.

All biomedical implants cause some level of foreign body response and are surrounded by at least a small capsule. Some people have very strong reactions that result in a large, thick capsule that constricts around the implant, impeding its function and causing pain. Between 10% and 30% of implants[9] need to be removed because of this scar tissue. For example, a neurostimulator could trigger the formation of a dense capsule of scar tissue that inhibits electrical stimulation[10] from properly reaching the nervous system.

To understand why the immune systems of some people build thick capsules around implants while others do not, we gathered capsule samples from 20 patients whose breast implants were removed – 10 who had severe reactions, and 10 who had mild reactions. By genetically analyzing the samples, we found that a gene called RAC2[11] was highly expressed in samples taken from patients with severe reactions but not in those with mild reactions. This gene is found only in immune cells[12], and it codes for a member of a family of proteins[13] involved in cell growth and structure.

Because this protein seemed to be linked to a lot of the downstream reactions that lead to foreign body responses, we decided to explore how RAC2 affects the formation of capsules. We found that immune cells activate RAC2 along with other proteins in response to mechanical stress[14] from implants. These proteins summon additional immune cells to the area that combine into a massive clump[15] to attack a large invader. These combined cells spit out fibrous proteins like collagen that form scar tissue.

Clinician holding a silicone breast implant
The mechanical stress that medical devices like breast implants place on surrounding tissues can trigger a foreign body response. megaflopp/iStock via Getty Images Plus[16]

To confirm RAC2’s role in foreign body responses, we artificially stimulated the mechanical signaling proteins surrounding silicone implants surgically placed in mice. This stimulation produced a severe and humanlike foreign body response in the mice. In contrast, blocking RAC2 resulted in an up to threefold reduction[17] in foreign body responses.

These findings suggest that activating mechanical stress pathways triggers immune cells with RAC2 to generate severe foreign body responses. Blocking RAC2 in immune cells may significantly reduce this reaction.

Developing new treatments

Implant failure is conventionally treated by using biocompatible materials[18] that the body can better tolerate, such as certain polymers. These don’t completely remove the risk of foreign body reactions, however.

My colleagues and I believe that treatments that target the pathways associated with RAC2 could mitigate or prevent foreign body responses. Heading off this reaction would help improve the effectiveness and safety of medical implants.

Because only immune cells express RAC2[19], a drug designed to block only that gene would theoretically target only immune cells without affecting other cells in the body. Such a drug could also be administered via injection or even coated onto an implant to minimize side effects.

A complete understanding of the molecular mechanisms driving foreign body responses would be the final frontier in developing truly bio-integrative medical devices – implants that could function in the body without problems for the recipient’s entire life span.


Microscopy image of Vibrio vulnificus

Flesh-eating bacteria sounds like the premise of a bad horror movie, but it’s a growing – and potentially fatal – threat to people.

In September 2023, the Centers for Disease Control and Prevention issued a health advisory[1] alerting doctors and public health officials of an increase in flesh-eating bacteria cases that can cause serious wound infections.

I’m a professor[2] at the Indiana University School of Medicine, where my laboratory[3] studies microbiology and infectious disease[4]. Here’s why the CDC is so concerned about this deadly infection – and ways to avoid contracting it.

What does ‘flesh-eating’ mean?

There are several types of bacteria that can infect open wounds and cause a rare condition called necrotizing fasciitis[5]. These bacteria do not merely damage the surface of the skin – they release toxins that destroy the underlying tissue, including muscles, nerves and blood vessels. Once the bacteria reach the bloodstream, they gain ready access to additional tissues and organ systems. If left untreated, necrotizing fasciitis can be fatal, sometimes within 48 hours.

The bacterial species group A Streptococcus[6], or group A strep, is the most common culprit behind necrotizing fasciitis. But the CDC’s latest warning points to an additional suspect, a type of bacteria called Vibrio vulnificus[7]. There are only 150 to 200 cases[8] of Vibrio vulnificus in the U.S. each year, but the mortality rate is high, with 1 in 5 people succumbing to the infection.

Climate change may be driving the rise in flesh-eating bacteria infections in the U.S.

How do you catch flesh-eating bacteria?

Vibrio vulnificus primarily lives in warm seawater but can also be found in brackish water – areas where the ocean mixes with freshwater. Most infections in the U.S. occur in the warmer months, between May and October[9]. People who swim, fish or wade in these bodies of water can contract the bacteria through an open wound or sore.

Vibrio vulnificus can also get into seafood harvested from these waters, especially shellfish like oysters. Eating such foods raw or undercooked can lead to food poisoning[10], and handling them while having an open wound can provide an entry point for the bacteria to cause necrotizing fasciitis. In the U.S., Vibrio vulnificus is a leading cause of seafood-associated deaths[11].

Why are flesh-eating bacteria infections rising?

Vibrio vulnificus is found in warm coastal waters around the world. In the U.S., this includes southern Gulf Coast states. But rising ocean temperatures due to global warming are creating new habitats for this type of bacteria, which can now be found along the East Coast as far north as New York and Connecticut[12]. A recent study[13] noted that Vibrio vulnificus wound infections increased eightfold between 1988 and 2018 in the eastern U.S.

Climate change[14] is also fueling stronger hurricanes and storm surges, which have been associated with spikes in flesh-eating bacteria infection cases.

Aside from increasing water temperatures, the number of people who are most vulnerable to severe infection[15], including those with diabetes[16] and those taking medications that suppress immunity, is on the rise.

What are symptoms of necrotizing fasciitis? How is it treated?

Early symptoms[17] of an infected wound include fever, redness, intense pain or swelling at the site of injury. If you have these symptoms, seek medical attention without delay. Necrotizing fasciitis can progress quickly[18], producing ulcers, blisters, skin discoloration and pus.

Treating flesh-eating bacteria[19] is a race against time. Clinicians administer antibiotics directly into the bloodstream to kill the bacteria. In many cases, damaged tissue needs to be surgically removed to stop the rapid spread of the infection. This sometimes results in amputation[20] of affected limbs.

Researchers are concerned that an increasing number of cases are becoming impossible to treat because Vibrio vulnificus has evolved resistance to certain antibiotics[21].

Necrotizing fasciitis is rare but deadly.

How do I protect myself?

The CDC offers several recommendations to help prevent infection[22].

People who have a fresh cut, including a new piercing or tattoo, are advised to stay out of water that could be home to Vibrio vulnificus. Otherwise, the wound should be completely covered with a waterproof bandage.

People with an open wound should also avoid handling raw seafood or fish. Wounds that occur while fishing, preparing seafood or swimming should be washed immediately and thoroughly with soap and water.

Anyone can contract necrotizing fasciitis, but people with weakened immune systems are most susceptible to severe disease[23]. This includes people taking immunosuppressive medications or those who have pre-existing conditions such as liver disease, cancer, HIV or diabetes.

It is important to bear in mind that necrotizing fasciitis presently remains very rare[24]. But given its severity, it is beneficial to stay informed.


Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to Curious Kids[2].

Why does a plane look and feel like it’s moving more slowly than it actually is? – Finn F., age 8, Concord, Massachusetts

A passenger jet flies[3] at about 575 mph once it’s at cruising altitude. That’s nearly nine times faster than a car might typically be cruising on the highway. So why does a plane in flight look like it’s just inching across the sky?

I am an aerospace educator[4] who relies on the laws of physics when teaching about aircraft. These same principles of physics help explain why looks can be deceiving when it comes to how fast an object is moving.

Moving against a featureless background

If you watch a plane accelerating toward takeoff, it appears to be moving very quickly. It’s not until the plane is in the air and has reached cruising altitude that it appears to be moving very slowly. That’s because there is often no independent reference point when the plane is in the sky.

A reference point is a way to measure the speed of the airplane. If there are no contrails[5] or clouds surrounding it, the plane is moving against a completely uniform blue sky. This can make it very hard to perceive just how fast a plane is moving. And because the plane is far away, it takes longer for it to move across your field of vision compared to an object that is close to you. This further creates the illusion that it is moving more slowly than it actually is.

These factors explain why a plane looks like it’s going more slowly than it is. But why does it feel that way, too?

A passenger’s perception on the plane

A plane feels like it’s traveling more slowly than it is because, just like when you look up at a plane in the sky, as a passenger on a plane, you have no independent reference point.
You and the plane are moving at the same speed, which can make it difficult to perceive your rate of motion relative to the ground beneath you. This is the same reason why it can be hard to tell that you are driving quickly on a highway that is surrounded only by empty fields with no trees.
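The distance effect can be put into rough numbers. What your eye actually tracks is angular speed – how fast an object sweeps across your field of view – which, for motion perpendicular to your line of sight, is approximately the object’s speed divided by its distance. Here is a back-of-the-envelope sketch in Python; the altitude and the car’s distance are illustrative assumptions, not figures from this article:

```python
import math

def angular_speed_deg_per_s(speed_m_s, distance_m):
    """Approximate angular speed (degrees per second) of an object
    moving perpendicular to the line of sight: omega ~ v / d."""
    return math.degrees(speed_m_s / distance_m)

# A jet at ~575 mph (~257 m/s), seen from ~10,700 m (35,000 ft) away:
jet = angular_speed_deg_per_s(257, 10_700)   # roughly 1.4 deg/s

# A car at ~65 mph (~29 m/s) passing just 30 m away:
car = angular_speed_deg_per_s(29, 30)        # roughly 55 deg/s

print(f"jet: {jet:.1f} deg/s, car: {car:.1f} deg/s")
```

Even though the jet is almost nine times faster than the car, under these assumptions it crawls across your field of view about 40 times more slowly – which is exactly the illusion described above.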
Perspective from a plane window of the plane's shadow against a brown field with the plane's white wing visible on the left side.
Watching the speed of a plane’s shadow can help you assess how quickly a plane is moving. Saul Loeb/AFP via Getty Images[6]

However, there are a couple of ways you might be able to understand just how fast you are moving.

Can you see the plane’s shadow[7] on the ground? It can give you perspective on how fast the plane is moving relative to the ground. If you are lucky enough to spot it, you will be amazed at how fast the plane’s shadow passes over buildings and roads. You can get a real sense of the 575 mph average speed of a cruising passenger plane.

Another way to understand how fast you are moving is to note how fast thin, spotty cloud cover moves over the wing. This reference point gives you another way to “see” or perceive your speed. Remember, though, that clouds aren’t typically stationary[8]; they’re just moving very slowly relative to the plane.
An airplane passes over thin, spotty cloud cover.
Although it can be difficult to discern just how fast a plane is actually moving, using reference points to gain perspective can help tremendously.

Has your interest in aviation been sparked? If so, there are a lot of great career opportunities in aeronautics[9].

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to Curious Kids[10]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.


Text saying: Uncommon Courses, from The Conversation
Uncommon Courses[1] is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

Title of course:

The Design of Coffee: An Introduction to Chemical Engineering

What prompted the idea for the course?

In 2012, my colleague professor Tonya Kuhl and I were drinking coffee and brainstorming how to improve our senior-level laboratory course in chemical engineering. Tonya looked at her coffee and suggested, “How about we have the students reverse-engineer a Mr. Coffee drip brewer to see how it works?” A light bulb went off in my head, and I said, “Why not make a whole course about coffee to introduce lots of students to chemical engineering?” And that’s what we did. We developed The Design of Coffee as a freshman seminar for 18 students in 2013, and, since then, the course has grown to over 2,000 general education students per year at the University of California, Davis.
A student wearing a flannel shirt uses a white microscope, with a pile of coffee beans and a metal scoop sitting next to them on the table.
A student uses a microscope to look at coffee beans in The Design of Coffee lab. UC Davis

What does the course explore?

The course focus is hands-on experiments with roasting, brewing and tasting in our coffee lab. For example, students measure the energy they use while roasting to illustrate the law of conservation of energy[2], they measure how the pH of the coffee[3] changes after brewing to illustrate the kinetics of chemical reactions, and they measure how the total dissolved solids[4] in the brewed coffee relate to time spent brewing to illustrate the principle of mass transfer[5]. The course culminates in an engineering design contest, where the students compete to make the best-tasting coffee using the least amount of energy. It’s a classic engineering optimization problem, but one that is broadly accessible – and tasty.

Why is this course relevant now?

Coffee plays a huge role in culture[6], diet[7] and the U.S.[8] and global economy[9]. But historically, relatively little academic work has focused on coffee. There are entire academic programs on wine and beer at many major universities, but almost none on coffee.
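Total dissolved solids measurements like the ones students take feed into a standard coffee-brewing calculation called extraction yield: the percentage of the dry grounds’ mass that ends up dissolved in the cup. This is a common industry formula, not necessarily the exact one used in the course, and the numbers below are hypothetical:

```python
def extraction_yield(tds_percent, brew_mass_g, dose_g):
    """Extraction yield: percentage of the dry grounds' mass that
    dissolved into the cup (EY% = TDS% * beverage mass / dose)."""
    return tds_percent * brew_mass_g / dose_g

# Hypothetical numbers: 20 g of grounds brewed into a 320 g cup
# measuring 1.35% total dissolved solids.
ey = extraction_yield(1.35, 320, 20)
print(f"extraction yield: {ey:.1f}%")   # extraction yield: 21.6%
```

Framing brewing this way is what turns “make the best-tasting coffee with the least energy” into the kind of quantitative optimization problem the course describes.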
A student wearing a black UC Davis sweatshirt holds a glass cup of coffee
Many students who don’t like coffee develop a taste for it over the course of the class. UC Davis The Design of Coffee helps fill a huge unmet demand because students are eager to learn about the beverage that they already enjoy. Perhaps most surprisingly, many of our students enter the course professing to hate coffee, but by the end of the course they are roasting and brewing their own coffee beans at home.What’s a critical lesson from the course?Many students are shocked to learn that black coffee can have fruity, floral or sweet flavors[10] without adding any sugar or syrups. The most important lesson from the course is that engineering is really a quantitative way to think about problem-solving. For example, if the problem to solve is “make coffee taste sweet without adding sugar,” then an engineering approach provides you with a tool set to tackle that problem quantitatively and rigorously. What materials does the course feature?Tonya and I originally self-published our lab manual, The Design of Coffee: An Engineering Approach[11], to keep prices low for our students. Now in its third edition, it has sold more than 15,000 copies and has been translated to Spanish[12], with Korean and Indonesian translations on the way.What will the course prepare students to do?Years ago, a student in our class told the campus newspaper, “I had no idea there was an engineering way to think about coffee!” Our main goal is to teach students that there is an engineering way to think about anything. The engineering skills and mindset we teach equally prepare students to design a multimillion-dollar biofuel refinery, a billion-dollar pharmaceutical production facility or, most challenging of all, a naturally sweet and delicious $3 cup of coffee. Our course is the first step in preparing students to tackle these problems, as well as new problems that no one has yet encountered.


In an exciting milestone for lunar scientists around the globe[1], India’s Chandrayaan-3 lander[2] touched down 375 miles (600 km)[3] from the south pole of the Moon[4] on Aug. 23, 2023.

In just under 14 Earth days, Chandrayaan-3 provided scientists with valuable new data and further inspiration to explore the Moon[5]. And the Indian Space Research Organization[6] has shared these initial results[7] with the world.

While the data from Chandrayaan-3’s rover[8], named Pragyan, or “wisdom” in Sanskrit, showed the lunar soil[9] contains expected elements such as iron, titanium, aluminum and calcium, it also showed an unexpected surprise – sulfur[10].

India’s lunar rover Pragyan rolls out of the lander and onto the surface.

Planetary scientists like me[11] have known that sulfur exists in lunar rocks and soils[12], but only at a very low concentration. These new measurements imply there may be a higher sulfur concentration than anticipated.

Pragyan has two instruments that analyze the elemental composition of the soil – an alpha particle X-ray spectrometer[13] and a laser-induced breakdown spectrometer[14], or LIBS[15] for short. Both of these instruments measured sulfur in the soil near the landing site.

Sulfur in soils near the Moon’s poles might help astronauts live off the land one day, making these measurements an example of science that enables exploration.

Geology of the Moon

There are two main rock types[16] on the Moon’s surface[17] – dark volcanic rock and the brighter highland rock. The brightness difference[18] between these two materials forms the familiar “man in the moon[19]” face or “rabbit picking rice” image to the naked eye.

The Moon, with the dark regions outlined in red, showing a face with two ovals for eyes and two shapes for the nose and mouth.
The dark regions of the Moon have dark volcanic soil, while the brighter regions have highland soil. Avrand6/Wikimedia Commons[20], CC BY-SA[21]

Scientists measuring lunar rock and soil compositions in labs on Earth have found that materials from the dark volcanic plains tend to have more sulfur[22] than the brighter highlands material.

Sulfur mainly comes from[23] volcanic activity. Rocks deep in the Moon contain sulfur, and when these rocks melt, the sulfur becomes part of the magma. When the melted rock nears the surface, most of the sulfur in the magma becomes a gas that is released along with water vapor and carbon dioxide.

Some of the sulfur does stay in the magma and is retained within the rock after it cools. This process explains why sulfur is primarily associated with the Moon’s dark volcanic rocks.

Chandrayaan-3’s readings are the first measurements of sulfur in soils made on the Moon’s surface. The exact amount of sulfur cannot be determined until the data calibration is completed.

The uncalibrated data[24] collected by the LIBS instrument on Pragyan suggests that the Moon’s highland soils near the poles might have a higher sulfur concentration than highland soils from the equator and possibly even higher than the dark volcanic soils.

These initial results give planetary scientists like me[25] who study the Moon new insights into how it works as a geologic system. But we’ll still have to wait and see if the fully calibrated data from the Chandrayaan-3 team confirms an elevated sulfur concentration.

Atmospheric sulfur formation

The measurement of sulfur is interesting to scientists for at least two reasons. First, these findings indicate that the highland soils at the lunar poles could have fundamentally different compositions, compared with highland soils at the lunar equatorial regions. This compositional difference likely comes from the different environmental conditions between the two regions – the poles get less direct sunlight.

Second, these results suggest that there’s somehow more sulfur in the polar regions. Sulfur concentrated here could have formed[26] from the exceedingly thin lunar atmosphere.

The polar regions of the Moon receive less direct sunlight and, as a result, experience extremely low temperatures[27] compared with the rest of the Moon. If the surface temperature falls below -73 degrees C (-99 degrees F), then sulfur from the lunar atmosphere could collect on the surface in solid form – like frost on a window.

Sulfur at the poles could also have originated from ancient volcanic eruptions[28] occurring on the lunar surface, or from meteorites containing sulfur that struck the surface and vaporized on impact.

Lunar sulfur as a resource

For long-lasting space missions, many agencies have thought about building some sort of base on the Moon[29]. Astronauts and robots could travel from the south pole base to collect, process, store and use naturally occurring materials like sulfur on the Moon – a concept called in-situ resource utilization[30].

In-situ resource utilization means fewer trips back to Earth to get supplies and more time and energy spent exploring. Using sulfur as a resource, astronauts could build solar cells and batteries that use sulfur, mix up sulfur-based fertilizer and make sulfur-based concrete for construction[31].

Sulfur-based concrete[32] actually has several benefits compared with the concrete normally used in building projects on Earth[33].

For one, sulfur-based concrete hardens and becomes strong within hours rather than weeks, and it’s more resistant to wear[34]. It also doesn’t require water in the mixture, so astronauts could save their valuable water for drinking, crafting breathable oxygen and making rocket fuel.

The gray surface of the Moon as seen from above, with a box showing the rover's location in the center.
The Chandrayaan-3 lander, pictured as a bright white spot in the center of the box. The box is 1,108 feet (338 meters) wide. NASA/GSFC/Arizona State University

While seven missions[35] are currently operating on or around the Moon, the lunar south pole region[36] hasn’t been studied from the surface before, so Pragyan’s new measurements will help planetary scientists understand the geologic history of the Moon. It’ll also allow lunar scientists like me to ask new questions about how the Moon formed and evolved.

For now, the scientists at the Indian Space Research Organization are busy processing and calibrating the data. On the lunar surface, Chandrayaan-3 is hibernating through the two-week-long lunar night, when temperatures will drop to -120 degrees C (-184 degrees F). The night will last until September 22.

There’s no guarantee that Chandrayaan-3’s lander, called Vikram, or its rover, Pragyan, will survive the extremely low temperatures, but should Pragyan awaken, scientists can expect more valuable measurements.

