Patterns on animal skin, such as zebra stripes and poison frog color patches, serve various biological functions, including temperature regulation[1], camouflage[2] and warning signals[3]. The colors making up these patterns must be distinct and well separated to be effective. For instance, as a warning signal, distinct colors make them clearly visible to other animals. And as camouflage, well-separated colors allow animals to better blend into their surroundings.

In our newly published research in Science Advances, my student Ben Alessio[4] and I[5] propose a potential mechanism[6] explaining how these distinctive patterns form – that could potentially be applied to medical diagnostics and synthetic materials.

A thought experiment can help visualize the challenge of achieving distinctive color patterns. Imagine gently adding a drop of blue and red dye to a cup of water. The drops will slowly disperse throughout the water due to the process of diffusion[7], where molecules move from an area of higher concentration to lower concentration. Eventually, the water will have an even concentration of blue and red dyes and become purple. Thus, diffusion tends to create color uniformity.

A question naturally arises: How can distinct color patterns form in the presence of diffusion?

Movement and boundaries

Mathematician Alan Turing first addressed this question in his seminal 1952 paper, “The Chemical Basis of Morphogenesis[8].” Turing showed that under appropriate conditions, the chemical reactions involved in producing color can interact with each other in a way that counteracts diffusion. This makes it possible for colors to self-organize and create interconnected regions with different colors, forming what are now called Turing patterns.

However, in mathematical models, the boundaries between color regions are fuzzy due to diffusion. This is unlike in nature, where boundaries are often sharp and colors are well separated.

Close-up of head of moray eel with dark brown patches separated by uneven white boundaries.
Moray eels have distinctive patterns on their skin. Asergieiev/iStock via Getty Images[9]

Our team thought a clue to figuring out how animals create distinctive color patterns could be found in lab experiments on micron-sized particles, such as the cells involved in producing the colors[10] of an animal’s skin. My work[11] and work from other labs[12] found that micron-sized particles form banded structures[13] when placed between a region with a high concentration of other dissolved solutes and a region with a low concentration of other dissolved solutes.

Diagram of a large blue circle moving to the right as it's swept along with the medium-sized red circles surrounding it also moving to the right, where there is a higher concentration of small green circles
The blue circle in this diagram is moving to the right due to diffusiophoresis, as it is swept along with the motion of the red circles moving into an area where there are more green circles. Richard Sear/Wikimedia Commons[14], CC BY-SA[15]

In the context of our thought experiment, changes in the concentration of blue and red dyes in water can propel other particles in the liquid to move in certain directions. As the red dye moves into an area where it is at a lower concentration, nearby particles will be carried along with it. This phenomenon is called diffusiophoresis[16].

You benefit from diffusiophoresis whenever you do your laundry[17]: Dirt particles move away from your clothing as soap molecules diffuse out from your shirt and into the water.

Drawing sharp boundaries

We wondered whether Turing patterns composed of regions of concentration differences could also move micron-sized particles. If so, would the resulting patterns from these particles be sharp and not fuzzy?

To answer this question, we conducted computer simulations[18] of Turing patterns – including hexagons, stripes and double spots – and found that diffusiophoresis makes the resulting patterns significantly more distinctive in all cases. These diffusiophoresis simulations were able to replicate the intricate patterns on the skin of the ornate boxfish and jewel moray eel, which isn’t possible through Turing’s theory alone.

This video shows small particles moving due to a related phenomenon called diffusioosmosis.

Further supporting our hypothesis, our model was able to reproduce the findings of a lab study[19] on how the bacterium E. coli moves molecular cargo within themselves. Diffusiophoresis resulted in sharper movement patterns, confirming its role as a physical mechanism behind biological pattern formation.

Because the cells that produce the pigments that make up the colors of an animal’s skin are also micron-sized, our findings suggest that diffusiophoresis may play a key role in creating distinctive color patterns more broadly in nature.

Learning nature’s trick

Understanding how nature programs specific functions can help researchers design synthetic systems that perform similar tasks.

Lab experiments have shown that scientists can use diffusiophoresis to create membraneless water filters[20] and low-cost drug development tools[21].

Our work suggests that combining the conditions that form Turing patterns with diffusiophoresis could also form the basis of artificial skin patches. Just like adaptive skin patterns in animals, when Turing patterns change – say from hexagons to stripes – this indicates underlying differences in chemical concentrations inside or outside the body.

Skin patches that can sense these changes could diagnose medical conditions and monitor a patient’s health by detecting changes in biochemical markers. These skin patches could also sense changes in the concentration of harmful chemicals in the environment.

The work ahead

Our simulations exclusively focused on spherical particles, while the cells that create pigments in skin come in varying shapes. The effect of shape on the formation of intricate patterns remains unclear.

Furthermore, pigment cells move in a complicated biological environment. More research is needed to understand how that environment inhibits motion and potentially freezes patterns in place.

Besides animal skin patterns, Turing patterns are also crucial to other processes such as embryonic development[22] and tumor formation[23]. Our work suggests that diffusiophoresis may play an underappreciated but important role in these natural processes.

Studying how biological patterns form will help researchers move one step closer to mimicking their functions in the lab – an age-old endeavor[24] that could benefit society.

Read more

Water pollution is a growing concern globally, with research estimating[1] that chemical industries discharge 300-400 megatonnes[2] (600-800 billion pounds) of industrial waste into bodies of water each year.

As a team of materials scientists[3], we’re working on an engineered “living material” that may be able to transform chemical dye pollutants from the textile industry[4] into harmless substances.

Water pollution[5] is both an environmental and humanitarian issue that can affect ecosystems and human health alike. We’re hopeful that the materials we’re developing could be one tool available to help combat this problem.

Engineering a living material

The “engineered living material[6]” our team has been working on contains programmed bacteria[7] embedded in a soft hydrogel material. We first published a paper showing the potential effectiveness of this material in Nature Communications[8] in August 2023.

The hydrogel[9] that forms the base of the material has similar properties to Jell-O – it’s soft and made mostly of water. Our particular hydrogel is made from a natural and biodegradable seaweed-based polymer called alginate[10], an ingredient common in some foods[11].

The alginate hydrogel provides a solid physical support for bacterial cells, similar to how tissues support cells[12] in the human body. We intentionally chose this material so that the bacteria we embedded could grow and flourish.

A green polymer, arranged in a square with a 5 by 5 grid of smaller squares, sits on a clear surface.
The grid shape of the material helps the bacteria take in carbon dioxide. David Baillot/UC San Diego Jacobs School of Engineering[13], CC BY-NC-ND[14]

We picked the seaweed-based alginate as the material base because it’s porous and can retain water. It also allows the bacterial cells[15] to take in nutrients from the surrounding environment.

After we prepared the hydrogel, we embedded photosynthetic – or sunlight-capturing – bacteria called cyanobacteria[16] into the gel.

The cyanobacteria embedded in the material still needed to take in light and carbon dioxide to perform photosynthesis[17], which keeps them alive. The hydrogel was porous enough to allow that, but to make the configuration as efficient as possible, we 3D-printed[18] the gel into custom shapes – grids and honeycombs. These structures have a higher surface-to-volume ratio that allow more light, CO₂ and nutrients to come into the material.

The cells were happy in that geometry. We observed higher cell growth and density over time in the alginate gels in the grid or honeycomb structures when compared with the default disc shape.

Cleaning up dye

Like all other bacteria, cyanobacteria has different genetic circuits[19], which tell the cells what outputs to produce. Our team genetically engineered[20] the bacterial DNA[21] so that the cells created a specific enzyme called laccase[22].

The laccase enzyme produced by the cyanobacteria works by performing a chemical reaction with a pollutant that transforms it into a form that’s no longer functional. By breaking the chemical bonds, it can make a toxic pollutant nontoxic. The enzyme is regenerated at the end of the reaction, and it goes off to complete more reactions.

Once we’d embedded these laccase-creating cyanobacteria into the alginate hydrogel, we put them in a solution made up of industrial dye pollutant[23] to see if they could clean up the dye. In this test, we wanted to see if our material could change the structure of the dye so that it went from being colored to uncolored. But, in other cases, the material could potentially change a chemical structure to go from toxic to nontoxic.

The dye we used, indigo carmine[24], is a common industrial wastewater pollutant usually found in the water near textile plants – it’s the main pigment in blue jeans. We found that our material took all the color out of the bulk of the dye over about 10 days.

This is good news, but we wanted to make sure that our material wasn’t adding waste to polluted water by leaching bacterial cells. So, we also engineered the bacteria to produce a protein that could damage the cell membrane of the bacteria – a programmable kill switch.

The genetic circuit was programmed to respond to a harmless chemical, called theophylline[25], commonly found in caffeine, tea and chocolate. By adding theophylline, we could destroy bacterial cells at will.

The field of engineered living materials is still developing, but this just means there are plenty of opportunities to develop new materials with both living and nonliving components.

Read more

The days of having a dictionary on your bookshelf are numbered. But that’s OK, because everyone already walks around with a dictionary – not the one on your phone, but the one in your head.

Just like a physical dictionary, your mental dictionary[1] contains information about words. This includes the letters, sounds and meaning, or semantics, of words, as well as information about parts of speech and how you can fit words together to form grammatical sentences. Your mental dictionary is also like a thesaurus. It can help you connect words and see how they might be similar in meaning, sound or spelling.

Read more …Your mental dictionary is part of what makes you unique − here's how your brain stores and...

a person's hands resting on a laptop that's displaying a cartoon image of a head with an elongated nose that's poking through a representation of text on a page

Misinformation is a key global threat[1], but Democrats and Republicans disagree about how to address the problem. In particular, Democrats and Republicans diverge sharply on removing misinformation from social media.

Only three weeks after the Biden administration announced the Disinformation Governance Board in April 2022, the effort to develop best practices for countering disinformation was halted[2] because of Republican concerns about its mission. Why do Democrats and Republicans have such different attitudes about content moderation?

My colleagues Jennifer Pan[3] and Margaret E. Roberts[4] and I found in a study published in the journal Science Advances that Democrats and Republicans not only disagree about what is true or false, they also differ in their internalized preferences[5] for content moderation. Internalized preferences may be related to people’s moral values, identities or other psychological factors, or people internalizing the preferences of party elites.

And though people are sometimes strategic about wanting misinformation that counters their political views removed, internalized preferences are a much larger factor in the differing attitudes toward content moderation.

Internalized preferences or partisan bias?

In our study, we found that Democrats are about twice as likely as Republicans to want to remove misinformation, while Republicans are about twice as likely as Democrats to consider removal of misinformation as censorship. Democrats’ attitudes might depend somewhat on whether the content aligns with their own political views, but this seems to be due, at least in part, to different perceptions of accuracy.

Previous research showed that Democrats and Republicans have different views[6] about content moderation of misinformation. One of the most prominent explanations is the “fact gap”: the difference in what Democrats and Republicans believe is true or false. For example, a study found that both Democrats and Republicans were more likely to believe news headlines that were aligned with their own political views[7].

But it is unlikely that the fact gap alone can explain the huge differences in content moderation attitudes. That’s why we set out to study two other factors that might lead Democrats and Republicans to have different attitudes: preference gap and party promotion. A preference gap is a difference in internalized preferences about whether, and what, content should be removed. Party promotion is a person making content moderation decisions based on whether the content aligns with their partisan views.

We asked 1,120 U.S. survey respondents who identified as either Democrat or Republican about their opinions on a set of political headlines that we identified as misinformation based on a bipartisan fact check. Each respondent saw one headline that was aligned with their own political views and one headline that was misaligned. After each headline, the respondent answered whether they would want the social media company to remove the headline, whether they would consider it censorship if the social media platform removed the headline, whether they would report the headline as harmful, and how accurate the headline was.

Deep-seated differences

When we compared how Democrats and Republicans would deal with headlines overall, we found strong evidence for a preference gap. Overall, 69% of Democrats said misinformation headlines in our study should be removed, but only 34% of Republicans said the same; 49% of Democrats considered the misinformation headlines harmful, but only 27% of Republicans said the same; and 65% of Republicans considered headline removal to be censorship, but only 29% of Democrats said the same.

Even in cases where Democrats and Republicans agreed that the same headlines were inaccurate, Democrats were nearly twice as likely as Republicans to want to remove the content, while Republicans were nearly twice as likely as Democrats to consider removal censorship.

We didn’t test explicitly why Democrats and Republicans have such different internalized preferences, but there are at least two possible reasons. First, Democrats and Republicans might differ in factors like their moral values[8] or identities[9]. Second, Democrats and Republicans might internalize what the elites in their parties signal. For example, Republican elites have recently framed content moderation as a free speech[10] and censorship[11] issue. Republicans might use these elites’ preferences to inform their own.

When we zoomed in on headlines that are either aligned or misaligned for Democrats, we found a party promotion effect: Democrats were less favorable to content moderation when misinformation aligned with their own views. Democrats were 11% less likely to want the social media company to remove headlines that aligned with their own political views. They were 13% less likely to report headlines that aligned with their own views as harmful. We didn’t find a similar effect for Republicans.

Our study shows that party promotion may be partly due to different perceptions of accuracy of the headlines. When we looked only at Democrats who agreed with our statement that the headlines were false, the party promotion effect was reduced to 7%.

Implications for social media platforms

We find it encouraging that the effect of party promotion is much smaller than the effect of internalized preferences, especially when accounting for accuracy perceptions. However, given the huge partisan differences in content moderation preferences, we believe that social media companies should look beyond the fact gap when designing content moderation policies that aim for bipartisan support.

Future research could explore whether getting Democrats and Republicans to agree on moderation processes[12] – rather than moderation of individual pieces of content – could reduce disagreement. Also, other types of content moderation such as downweighting, which involves platforms reducing the virality of certain content, might prove to be less contentious. Finally, if the preference gap – the differences in deep-seated preferences between Democrats and Republicans – is rooted in value differences, platforms could try to use different moral framings[13] to appeal to people on both sides of the partisan divide.

For now, Democrats and Republicans are likely to continue to disagree over whether removing misinformation from social media improves public discourse or amounts to censorship.

Read more

Have you ever wondered whether the virus that gave you a nasty cold can catch one itself? It may comfort you to know that, yes, viruses can actually get sick. Even better, as karmic justice would have it, the culprits turn out to be other viruses.

Viruses can get sick in the sense that their normal function is impaired. When a virus enters a cell, it can either go dormant or start replicating right away[1]. When replicating, the virus essentially commandeers the molecular factory of the cell to make lots of copies of itself, then breaks out of the cell to set the new copies free.

Sometimes a virus enters a cell only to find that its new temporary dwelling is already home to another dormant virus. Surprise, surprise. What follows is a battle for control of the cell that can be won by either party.

But sometimes a virus will enter a cell to find a particularly nasty shock: a viral tenant waiting specifically to prey on the incoming virus.

I am a bioinformatician[2], and my laboratory[3] studies the evolution of viruses. We frequently run into “viruses of viruses,” but we recently discovered something new: a virus that latches onto the neck of another virus[4].

A world of satellites

Biologists have known of the existence of viruses that prey on other viruses – referred to as viral “satellites”[5] – for decades. In 1973, researchers studying bacteriophage P2, a virus that infects the gut bacterium Escherichia coli, found that this infection sometimes led to two different types of viruses emerging from the cell: phage P2 and phage P4[6].

Bacteriophage P4 is a temperate virus, meaning it can integrate into the chromosome of its host cell and lie dormant. When P2 infects a cell already harboring P4, the latent P4 quickly wakes up and uses the genetic instructions of P2[7] to make hundreds of its own small viral particles. The unsuspecting P2 is lucky to replicate a few times, if at all. In this case, biologists refer to P2 as a “helper” virus, because the satellite P4 needs P2’s genetic material to replicate and spread.

Bacteriophages are viruses that infect bacteria.

Subsequent research has shown that most bacterial species have a diverse set of satellite-helper systems[8], like that of P4-P2. But viral satellites are not limited to bacteria. Shortly after the largest known virus, mimivirus, was discovered in 2003, scientists also found its satellite, which they named Sputnik[9]. Plant viral satellites[10] that lurk in plant cells waiting for other viruses are also widespread and can have important effects on crops[11].

Viral arms race

Although researchers have found satellite-helper viral systems in pretty much every domain of life[12], their importance to biology remains underappreciated. Most obviously, viral satellites have a direct impact on their “helper” viruses, typically maiming them but sometimes making them more efficient killers[13]. Yet that is probably the least of their contributions to biology.

Satellites and their helpers are also engaged in an endless evolutionary arms race[14]. Satellites evolve new ways to exploit helpers and helpers evolve countermeasures to block them. Because both sides are viruses, the results of this internecine war necessarily include something of interest to people: antivirals.

Recent work indicates that many antiviral systems thought to have evolved in bacteria, like the CRISPR-Cas9 molecular scissors used in gene editing, may have originated in phages and their satellites[15]. Somewhat ironically, with their high turnover and mutation rates, helper viruses and their satellites turn out to be evolutionary hot spots for antiviral weaponry[16]. Trying to outsmart each other, satellite and helper viruses have come up with an unparalleled array of antiviral systems for researchers to exploit.

MindFlayer and MiniFlayer

Viral satellites have the potential to transform how researchers understand antiviral strategies, but there is still a lot to learn about them. In our recent work, my collaborators and I describe a satellite bacteriophage completely unlike previously known satellites, one that has evolved a unique, spooky lifestyle[17].

Undergraduate phage hunters[18] at the University of Maryland, Baltimore County isolated a satellite phage called MiniFlayer[19] from the soil bacterium Streptomyces scabiei. MiniFlayer was found in close association with a helper virus called bacteriophage MindFlayer[20] that infects the Streptomyces bacterium. But further research revealed that MiniFlayer was no ordinary satellite.

Microscopy image of a small round virus colored violet attached to the base of a larger round virus colored gray with a long tail
This image shows Streptomyces satellite phage MiniFlayer (purple) attached to the neck of its helper virus, Streptomyces phage MindFlayer (gray). Tagide deCarvalho[21]

MiniFlayer is the first satellite phage known to have lost its ability to lie dormant. Not being able to lie in wait for your helper to enter the cell poses an important challenge to a satellite phage. If you need another virus to replicate, how do you guarantee that it makes it into the cell around the same time you do?

MiniFlayer addressed this challenge with evolutionary aplomb and horror-movie creativity. Instead of lying in wait, MiniFlayer has gone on the offensive. Borrowing from both “Dracula” and “Alien,” this satellite phage evolved a short appendage[22] that allows it to latch onto its helper’s neck like a vampire. Together, the unwary helper and its passenger travel in search of a new host, where the viral drama will unfold again. We don’t yet know how MiniFlayer subdues its helper, or whether MindFlayer has evolved countermeasures.

If the recent pandemic has taught us anything, it is that our supply of antivirals is rather limited[23]. Research on the complex, intertwined and at times predatory nature of viruses and their satellites, like the ability of MiniFlayer to attach to its helper’s neck, has the potential to open new avenues for antiviral therapy.

Read more

A group of women in business dress sit around a conference table

On Oct. 4, 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights[1]: A Vision for Protecting Our Civil Rights in the Algorithmic Age. The blueprint launched a conversation about how artificial intelligence innovation can proceed under multiple fair principles. These include safe and effective systems, algorithmic discrimination protections, privacy and transparency.

A growing body of evidence highlights the civil and consumer rights that AI and automated decision-making jeopardize. Communities that have faced the most egregious discrimination historically now face complex and highly opaque forms of discrimination under AI systems. This discrimination occurs in employment, housing, voting, lending, criminal justice, social media, ad tech targeting, surveillance and profiling. For example, there have been cases of AI systems contributing to discrimination against women in hiring and racial discrimination[2] in the criminal justice system.

In the months that followed the blueprint’s release, the arrival of generative AI systems like ChatGPT added urgency to discussions about how best to govern emerging technologies in ways that mitigate risk without stifling innovation.

A year after the blueprint was unveiled, the Biden administration issued a broad executive order[3] on Oct. 30, 2023, titled Safe, Secure, and Trustworthy AI. While much of the order focuses on safety, it incorporates many of the principles in the blueprint.

The order includes several provisions that focus on civil rights and equity. For example, it requires that the federal government develop guidance for federal contractors on how to prevent AI algorithms from being used to exacerbate discrimination. It also calls for training on how best to approach the investigation and prosecution of civil rights violations related to AI and ensure AI fairness throughout the criminal justice system.

The vision laid out in the blueprint has been incorporated in the executive order as guidance for federal agencies. My research in technology and civil rights[4] underscores the importance of civil rights and equity principles in AI regulation.

Civil rights and AI

Civil rights laws often take decades or even lifetimes to advance. Artificial intelligence technology and algorithmic systems are rapidly introducing black box[5] harms such as automated decision-making that may lead to disparate impacts. These include racial bias in facial recognition systems.

These harms are often difficult to challenge, and current civil rights laws and regulations may not be able to address them. This raises the question of how to ensure that civil rights are not compromised as new AI technologies permeate society.

When combating algorithmic discrimination, what does an arc that bends toward justice look like? What does a “Letter from Birmingham Jail[6]” look like when a civil rights activist is protesting not unfair physical detention but digital constraints such as disparate harms from digitized forms of profiling, targeting and surveillance?

The 2022 blueprint was developed under the leadership of Alondra Nelson[7], then acting director[8] of the Office of Science and Technology Policy[9], and her team. The blueprint lays out a series of fair principles that attempt to limit a constellation of harms that AI and automated systems can cause.

Beyond that, the blueprint links the concepts of AI fair principles and AI equity to the U.S. Constitution and the Bill of Rights. By associating these fair principles with civil rights and the Bill of Rights, the dialogue can transition away from a discussion that focuses only on a series of technical commitments, such as making AI systems more transparent. Instead, the discussion can address how the absence of these principles might threaten democracy.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, and Alondra Nelson, former acting director, discussed the Blueprint for an AI Bill of Rights at a conference on the anniversary of its release.

A few months after the release of the blueprint, the U.S. Department of Civil Rights Division, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Federal Trade Commission jointly pledged to uphold the U.S.’s commitment[10] to the core principles of fairness, equality and justice as emerging automated systems become increasingly common in daily life. Federal[11] and state legislation[12] has been proposed to combat the discriminatory impact of AI and automated decision-making.

Civil rights organizations take on tech

Multiple civil rights organizations, including the Leadership Conference on Civil and Human Rights[13], have made AI-based discrimination a priority. On Sept. 7, 2023, the Leadership Conference launched[14] a new Center for Civil Rights and Technology[15] and tapped Nelson, author of the Blueprint for an AI Bill of Rights, as an adviser.

Before the release of the new executive order, Sen. Ed Markey, Rep. Pramila Jayapal and other members of Congress sent a letter to the White House urging the administration to incorporate the blueprint’s principles[16] into the anticipated executive order. They said that “the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era.”

Numerous civil rights and civil society organizations sent a similar letter to the White House[17], urging the administration to take action on the blueprint’s principles in the executive order.

As the Blueprint for an AI Bill of Rights passed its first anniversary, its long-term impact was unknown. But, true to its title, it presented a vision for protecting civil rights in the algorithmic age. That vision has now been incorporated in the Executive Order on Safe, Secure, and Trustworthy AI. The order can’t be properly understood without this civil rights context.

Read more

a woman speaks at a podium to a room of seated people while a man stands nearby watching

The comprehensive, even sweeping, set of guidelines[1] for artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, show that the U.S. government is attempting to address the risks posed by AI.

As a researcher of information systems and responsible AI[2], I believe the executive order represents an important step in building responsible[3] and trustworthy[4] AI.

The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk of AI systems revealing sensitive or confidential information[5].

Understanding AI risks

Technology is typically evaluated for performance, cost and quality[6], but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:

The National Institute of Standards and Technology (NIST) issued a comprehensive AI risk management framework[13] in January 2023 that aims to address many of these issues. The framework serves as the foundation[14] for much of the Biden administration’s executive order. The executive order also empowers the Department of Commerce[15], NIST’s home in the federal government, to play a key role in implementing the proposed directives.

Researchers of AI ethics have long cautioned that stronger auditing of AI systems[16] is needed to avoid giving the appearance of scrutiny without genuine accountability[17]. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practices outpace actual AI ethics initiatives[18]. The executive order could help by specifying avenues for enforcing accountability.

Another important initiative outlined in the executive order is probing for vulnerabilities of very large-scale general-purpose AI models[19] trained on massive amounts of data, such as the models that power OpenAI’s ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economy to perform red teaming[20] and report the results to the government. Red teaming is using manual or automated methods to attempt to force an AI model to produce harmful output[21] – for example, make offensive or dangerous statements like advice on how to sell drugs.

Reporting to the government is important given that a recent study found most of the companies that make these large-scale AI systems lacking[22] when it comes to transparency.

Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce to develop guidance for labeling AI-generated content[23]. Federal agencies will be required to use AI watermarking[24] – technology that marks content as AI-generated to reduce fraud and misinformation – though it’s not required for the private sector.

The executive order also recognizes that AI systems can pose unacceptable risks[25] of harm to civil and human rights[26] and the well-being of individuals: “Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.”

The U.S. government takes steps to address the risks posed by AI.

What the executive order doesn’t do

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order’s directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it’s difficult to measure the impact that decision-making AI systems have on data privacy and freedoms[27].

It’s also worth noting that algorithmic transparency is not a panacea. For example, the European Union’s General Data Protection Regulation legislation mandates “meaningful information about the logic involved[28]” in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understand how the system affects them[29]. But knowing how an AI system works doesn’t necessarily tell you why it made a particular decision[30].

With algorithmic decision-making becoming pervasive, the White House executive order and the international summit on AI safety[31] highlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.

Read more

More Articles …