
Prominent cases of purported lying continue to dominate the news cycle. Hunter Biden was charged with lying on a government form[1] while purchasing a handgun. Republican Representative George Santos allegedly lied in many ways[2], including to donors through a third party in order to misuse the funds raised. The rapper Offset admitted to lying on Instagram[3] about his wife, Cardi B, being unfaithful.

There are a number of variables that distinguish these cases. One is the audience: the faceless government, particular donors and millions of online followers, respectively. Another is the medium used to convey the alleged lie: on a bureaucratic form, through intermediaries and via social media.

Differences like these lead researchers like me to wonder what factors influence the telling of lies. Does a personal connection increase or decrease the likelihood of sticking to the truth? Are lies more prevalent on text or email than on the phone or in person?

An emerging body of empirical research is trying to answer these questions, and some of the findings are surprising. They hold lessons, too – for how to think about the areas of your life where you might be more prone to tell lies, and also about where to be most cautious in trusting what others are saying. As the recent director of The Honesty Project[4] and author of “Honesty: The Philosophy and Psychology of a Neglected Virtue[5],” I am especially interested in whether most people tend to be honest or not.

Figuring out the frequency of lies

Most research on lying asks participants to self-report their lying behavior, say during the past day or week. (Whether you can trust liars to tell the truth about lying is another question.)

The classic study on lying frequency was conducted by psychologist Bella DePaulo[6] in the mid-1990s. It focused on face-to-face interactions and used a group of student participants and another group of volunteers from the community around the University of Virginia. The community members averaged one lie per day[7], while the students averaged two lies per day. This result became the benchmark finding in the field of honesty research and helped lead to an assumption among many researchers that lying is commonplace[8].

But averages do not describe individuals. It could be that each person in the group tells one or two lies per day. But it’s also possible that some people lie prolifically while others lie very rarely.

In an influential 2010 study, this second scenario is indeed what Michigan State University communication researcher Kim Serota[9] and his colleagues found. Out of 1,000 American participants, 59.9% claimed not to have told a single lie[10] in the past 24 hours. Of those who admitted they did lie, most said they’d told very few lies. Participants reported 1,646 lies in total, but half of them came from just 5.3% of the participants.
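The arithmetic behind that skew is easy to check. The sketch below is a hypothetical reconstruction, assuming only the published aggregates (1,000 participants, 1,646 reported lies, 59.9% reporting zero lies, 5.3% producing about half of all lies); the exact per-group split is invented for illustration and is not the study’s raw data.

```python
# Illustrative split of the Serota et al. (2010) totals.
# Published aggregates: 1,000 participants, 1,646 reported lies,
# 59.9% reporting zero lies, 5.3% accounting for ~half of all lies.
# The group-level means below are an assumed reconstruction.

participants = 1000
total_lies = 1646

zero_liars = 599                                   # 59.9% told no lies
prolific = 53                                      # 5.3% of participants
prolific_lies = total_lies // 2                    # ~half of all lies: 823
occasional = participants - zero_liars - prolific  # 348 occasional liars
occasional_lies = total_lies - prolific_lies       # the other 823 lies

print(f"overall mean:    {total_lies / participants:.2f} lies/day")     # 1.65
print(f"prolific mean:   {prolific_lies / prolific:.1f} lies/day")      # 15.5
print(f"occasional mean: {occasional_lies / occasional:.1f} lies/day")  # 2.4
```

The point of the sketch: a population mean of about 1.65 lies per day is fully compatible with a majority who told none at all, because a small prolific group can carry most of the total.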

This general pattern in the data has been replicated[11] several times. Lying tends to be rare, except in the case of a small group of frequent liars.

Does the medium make a difference?

Might lying become more frequent under various conditions? What if you don’t just consider face-to-face interactions, but introduce some distance by communicating via text, email or the phone?

Research suggests the medium doesn’t matter much. For instance, a 2014 study by Northwestern University communication researcher Madeline Smith[12] and her colleagues found that when participants were asked to look at their 30 most recent text messages, 23% said there were no deceptive texts[13]. For the rest of the group, the vast majority said that 10% or fewer of their texts contained lies.

Recent research by David Markowitz at the University of Oregon successfully replicated earlier findings that had compared the rates of lying using different technologies[14]. Are lies more common on text, the phone or on email? Based on survey data from 205 participants, Markowitz found that on average, people told 1.08 lies per day[15], but once again with the distribution of lies skewed by some frequent liars.

Not only were the percentages fairly low, but the differences in lying frequency across the different media were not large. Still, it might be surprising to find that lying on video chat was more common than lying face-to-face, with lying on email being least likely.

A couple of factors could be playing a role[16]. Recordability seems to rein in the lies – perhaps knowing that the communication leaves a record raises worries about detection and makes lying less appealing. Synchronicity seems to matter too. Many lies occur in the heat of the moment, so it makes sense that when there’s a delay in communication, as with email, lying would decrease.

Does the audience change things?

In addition to the medium, does the intended receiver of a potential lie make any difference?

Initially you might think that people are more inclined to lie to strangers than to friends and family, given the impersonality of the interaction in the one case and the bonds of care and concern in the other. But matters are a bit more complicated.

In her classic work, DePaulo found that people tend to tell what she called “everyday lies” more often to strangers than family members[17]. To use her examples, these are smaller lies like “told her (that) her muffins were the best ever” and “exaggerated how sorry I was to be late.” For instance, DePaulo and her colleague Deborah Kashy reported that participants in one of their studies lied less than once per 10 social interactions[18] with spouses and children.

However, when it came to serious lies about things like affairs or injuries, the pattern flipped. Among the study’s community participants, 53% of serious lies were told to close partners[19], and the proportion jumped to 72.7% among student volunteers. Perhaps not surprisingly, in these situations people might value not damaging their relationships more than they value the truth. Other data also finds participants tell more lies to friends and family members[20] than to strangers.

Investigating the truth about lies

It is worth emphasizing that these are all initial findings. Further replication is needed, and cross-cultural studies using non-Western participants are scarce. Additionally, there are many other variables that could be examined, such as age, gender, religion and political affiliation.

When it comes to honesty, though, I find the results, in general, promising. Lying seems to happen rarely for many people, even toward strangers and even via social media and texting. Where people need to be especially discerning, though, is in identifying – and avoiding – the small number of rampant liars out there. If you’re one of them yourself, maybe you never realized that you’re actually in a small minority.

Read more

What does it mean to be a good thinker? Recent research suggests that acknowledging you can be wrong plays a vital role.

I had these studies in mind a few months ago when I was chatting with a history professor about a class she was teaching to first-year students here at Wake Forest University. As part of my job as a psychology professor who researches character[1] – basically, what it means to be a good person – I often talk to my colleagues about how our teaching can develop the character of our students.

In this case, my colleague saw her class as an opportunity to cultivate character traits that would allow students to respectfully engage with and learn from others when discussing contentious topics. Wanting to learn about and understand the world is a distinctive human motivation[2]. As teachers, we want our students to leave college with the ability and motivation to understand and learn more about themselves, others and their world. She wondered: Was there one characteristic or trait that was most important to cultivate in her students?

I suggested she should focus on intellectual humility[3]. Being intellectually humble means being open to the possibility you could be wrong about your beliefs.

But is being humble about what you know or don’t know enough?

I now think my recommendation was incorrect. It turns out good thinking requires more than intellectual humility – and, yes, I see the irony that admitting this means I had to draw on my own intellectual humility.

To be ready to learn, you need to acknowledge that what you currently believe could be wrong. vm/iStock via Getty Images Plus[4]

Acknowledging you might not be right

One reason for my focus on intellectual humility was that without acknowledging the possibility that your current beliefs may be mistaken, you literally can’t learn anything new. While being open to being wrong is generally quite challenging – especially for first-year university students confronting the limits of their understanding – it is arguably the key first step in learning.

But another reason for my response is that research on intellectual humility has exploded[5] in the past 10 years. Psychologists now have many different ways[6] to assess intellectual humility. Social scientists know that possessing a high level of intellectual humility is associated with multiple positive outcomes, like having more empathy[7], more prosocial behavior[8], reduced susceptibility to misinformation[9] and an increased inclination to seek compromise[10] in challenging interpersonal disagreements.

If you want to focus on one trait to promote good thinking, it seems that intellectual humility is hard to beat. Indeed, researchers, including those in my own lab[11], are now testing interventions to promote it among different populations.

A single trait won’t make you a good thinker

However, was I right in recommending just a single trait? Is intellectual humility by itself enough to promote good thinking? When you zoom out to consider what is really involved in being a good thinker, it becomes clear that simply acknowledging that one could be wrong is not enough.

To provide an example, perhaps someone is willing to acknowledge that they could be wrong because “whatever, man” – they didn’t have particularly strong convictions to begin with. In other words, it’s not enough to admit you might be mistaken about your beliefs. You also need to care about having the right beliefs.

While part of being a good thinker involves recognizing one’s possible ignorance, it also requires an eagerness to learn, curiosity about the world, and a commitment to getting it right.

What other traits, then, should people strive to cultivate? The philosopher Nate King writes that being a good thinker involves possessing multiple traits[12], including intellectual humility, but also intellectual firmness, love of knowledge, curiosity, carefulness and open-mindedness.

Being a good thinker involves confronting multiple challenges beyond being humble about what you know. You also need to:

  • Be sufficiently motivated to figure out what’s true.
  • Focus on the pertinent information and carefully seek it out.
  • Be open-minded when considering information that you may disagree with.
  • Confront information or questions that are novel or different from what you’re generally used to engaging with.
  • Be willing to put in the effort to figure it all out.

This is a lot, but philosopher Jason Baehr writes that possessing good intellectual character requires successfully addressing all these challenges[13].

Good intellectual character depends on more than one key trait. Tashi-Delek/E+ via Getty Images[14]

Additional ingredients for good thinking

So, I was wrong to say that intellectual humility was the silver bullet that can teach students how to think well. Indeed, being intellectually humble – in a way that promotes good thinking – likely involves being both curious and open-minded about new information.

Focusing on a single characteristic such as intellectual humility rather than the totality of intellectual character ends up promoting lopsided character development, similar to that of a bodybuilder focusing their efforts on one bicep rather than their whole body[15].

My lab’s current work is now attempting to address this issue by defining the good thinker in terms of multiple intellectual traits. This approach is similar to work in personality science that has identified key traits of people who are psychologically healthy as well as those whose patterns of thinking, feeling and behaving cause enduring distress or problems. We hope to further understand how good thinkers function in daily life[16] – for example, their personality, the quality of their relationships and their well-being – as well as how their intellectual character influences their thinking, behavior and sense of identity[17].

I think this work[18] is vital in order to understand the key characteristics of good thinking and to learn more about how to build these habits in ourselves and others.

Read more

If you live on the East Coast, you may have driven through roundabouts in your neighborhood countless times. Or maybe, if you’re in some parts farther west, you’ve never encountered one of these intersections. But roundabouts, while a relatively new traffic control measure, are catching on across the United States[1].

Roundabouts, also known as traffic circles or rotaries, are circular intersections[2] designed to improve traffic flow and safety. They offer several advantages over conventional intersections controlled by traffic signals or stop signs, but by far the most important one is safety.

Modern roundabouts can have one or two lanes, and usually have four exit options. AP Photo/Alex Slitz[3]

I research transportation engineering[4], particularly traffic safety and traffic operations. Some of my past studies[5] have examined the safety and operational effects of installing roundabouts at an intersection. I’ve also compared the performance of roundabouts versus stop-controlled intersections.

A brief history of roundabouts

As early as the 1700s, some city planners proposed and even constructed circular places – sites where roads converged – such as the Circus[6] in Bath, England, and the Place Charles de Gaulle[7] in France. In the U.S., architect Pierre L'Enfant incorporated several into his design for Washington, D.C.[8]. These circles were the predecessors to roundabouts.

In 1903, French architect and influential urban planner Eugène Hénard became one of the first to introduce the idea[9] of moving traffic in a circle[10] to control busy intersections in Paris[11].

Around the same time, William Phelps Eno[12], an American businessman known as the father of traffic safety and control, also proposed roundabouts to alleviate traffic congestion in New York City[13].

In the years that followed, a few other cities tried out a roundabout-like design, with varying levels of success[14]. These roundabouts didn’t have any sort of standardized design guidelines, and most of them were too large to be effective and efficient, as vehicles would enter at higher speeds without always yielding.

The birth of the modern roundabout[15] came with yield-at-entry regulations, adopted in some towns in Great Britain in the 1950s. With yield-at-entry regulations, the vehicles entering the roundabout had to give way to vehicles already circulating in the roundabout. This was made a rule nationwide in the United Kingdom in 1966, then in France in 1983.

Yield-at-entry meant vehicles moved through these modern roundabouts more slowly, and over the years, engineers added features that brought the designs closer to today’s roundabouts. Many added pedestrian crossings and splitter islands – raised curbs where vehicles enter and exit – which controlled vehicles’ speeds[16].

Engineers, planners and decision-makers worldwide noticed that these roundabouts improved traffic flow, reduced congestion and improved safety at intersections. Roundabouts then spread throughout Europe and Australia[17].

Three decades later, modern roundabouts came to North America. The first modern roundabout[18] in the U.S. was built in Summerlin, on the west side of Las Vegas[19], in 1990.

Roundabouts require the driver to yield before entering and signal before exiting.

Ever since, the construction of modern roundabouts in the U.S. has picked up steam. There are now about 10,000 roundabouts in the country[20].

Why use roundabouts?

Roundabouts likely caught on so quickly because they reduce the number of potential conflict points[21]. A conflict point at an intersection is a location where the paths of two or more vehicles or road users cross or have the potential to cross. The more conflict points, the more likely vehicles are to crash.

A roundabout has only eight potential conflict points, compared to 32 at a conventional four-way intersection[22]. At roundabouts, vehicles don’t cross each other at a right angle, and there are fewer points where vehicles merge or diverge into or away from each other.
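The 32-versus-8 comparison is a tally over the standard conflict-point categories. Here is a minimal sketch of that arithmetic, assuming the commonly cited breakdown (16 crossing, 8 merging and 8 diverging points at a four-leg intersection; no crossing points, 4 merging and 4 diverging at a single-lane roundabout):

```python
# Vehicle-vehicle conflict points by category: a conventional four-leg
# intersection versus a single-lane roundabout, using the commonly
# cited category counts. The totals are just the sums of the categories.

four_way_intersection = {"crossing": 16, "merging": 8, "diverging": 8}
single_lane_roundabout = {"crossing": 0, "merging": 4, "diverging": 4}

print(sum(four_way_intersection.values()))   # 32
print(sum(single_lane_roundabout.values()))  # 8
```

Notice that the roundabout eliminates the crossing category entirely – the type of conflict responsible for right-angle collisions – which is why the safety gains are concentrated in serious crashes.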

The roundabout’s tight circle forces approaching traffic to slow down and yield to circulating traffic, and then move smoothly around the central island. As a result, roundabouts have fewer stop-and-go issues[23], which reduces fuel consumption and vehicle emissions and allows drivers to perform U-turns more easily. Since traffic flows continuously at lower speeds in a roundabout, this continuous flow minimizes the need for vehicles to stop, which reduces congestion.

The Federal Highway Administration estimates that when a roundabout replaces a stop sign-controlled intersection, it reduces serious and fatal injury crashes by 90%[24], and when it replaces an intersection with a traffic light, it reduces serious and fatal injury crashes by nearly 80%[25].

Why do some places have more than others?

Engineers and planners traditionally have installed roundabouts in intersections with severe congestion or a history of accidents[26]. But, with public support and funding, they can get installed anywhere.

For some traffic engineers, the sky’s the limit.

But roundabouts aren’t needed at every intersection. In places where congestion isn’t an issue, city planners tend not to push for them[27]. For example, while there are around 750 roundabouts in Florida, there are fewer than 50 in North Dakota[28], South Dakota[29] and Wyoming[30] combined.

Roundabouts have been gaining popularity[31] in the U.S. in recent years, in part because the Federal Highway Administration recommends them[32] as the safest option. Some states, like New York and Virginia, have adopted a “roundabout first” policy, where engineers default to using roundabouts where feasible when building or upgrading intersections.

In 2000, the U.S. had only 356 roundabouts[33]. Over the past two decades, that number has grown to over 10,000[34]. Love them or hate them, the roundabout’s widespread adoption suggests that these circular intersections are here to stay.

Read more

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to the Curious Kids team[2].

Is it possible for there to be ghosts? – Madelyn, age 11, Fort Lupton, Colorado

Certainly, lots of people believe in ghosts – a spirit left behind after someone who was alive has died. In a 2021 poll of 1,000 American adults[3], 41% said they believe in ghosts, and 20% said they had personally experienced them. If they’re right, that’s more than 50 million spirit encounters in the U.S. alone.

That includes the owner of a retail shop near my home who believes his place is haunted. When I asked what most convinced him of this, he sent me dozens of eerie security camera video clips. He also brought in ghost hunters who reinforced his suspicions. Some of the videos show small orbs of light gliding around the room. In others, you can hear faint voices and loud bumping sounds when nobody’s there. Others show a book flying off a desk[4] and products jumping off a shelf.
Many ghostly encounters are due to the way your brain interprets certain sights and sounds.
It’s not uncommon for me to hear stories like this. As a sociologist[5], some of my work looks at beliefs in things like ghosts[6], aliens[7], pyramid power[8] and superstitions[9]. Along with others who practice scientific skepticism, I keep an open mind while maintaining that extraordinary claims require extraordinary evidence. Tell me you had a burger for lunch, and I’ll take your word for it. Tell me you shared your fries with Abraham Lincoln’s ghost, and I’ll want more evidence.

In the “spirit” of critical thinking, consider the following three questions:

Are ghosts possible?

People may think they’re experiencing ghosts when they hear strange voices, see moving objects, witness balls or wisps of light or even translucent people. Yet no one describes ghosts as aging, eating, breathing[10] or using bathrooms – despite plumbers receiving many calls about toilets “ghost-flushing”[11].

So could ghosts be made of a special kind of energy[12] that hovers and flies without dissipating? If so, then when ghosts glow, move objects and make sounds, they are acting like matter – something that takes up space and has mass, like wood, water, plants and people. Conversely, when passing through walls or vanishing, they must not act like matter. But centuries of physics research[13] have found nothing like this exists, which is why physicists say ghosts can’t exist[14]. And so far, there is no proof that any part of a person can continue on after death.
The real truth is out there, says this ghost skeptic.
What’s the evidence?

Never before in history have people recorded so many ghost encounters, thanks in part to mobile phone cameras and microphones. It seems there would be great evidence by now. But scientists don’t have it[15]. Instead, there are lots of ambiguous recordings sabotaged by bad lighting and faulty equipment. But popular television shows on ghost hunting[16] convince many viewers that blurry images and emotional reactions are proof enough.

As for all the devices[17] ghost hunters use to capture sounds, electrical fields and infrared radiation – they may look scientific, but they’re not[18]. Measurements are worthless without some knowledge of the thing you’re measuring.

When ghost hunters descend on an allegedly haunted location for a night of meandering and measurement, they usually find something they later deem paranormal. It may be a moving door (breeze?), a chill (gap in the floorboards?), a glow (light entering from outside?), electrical fluctuations (old wiring?), or bumps and faint voices (crew in other rooms?). Whatever happens, ghost hunters will draw a bull’s-eye around it, interpret that as “evidence” and investigate no further[19].
There’s a scientific explanation for spooky sightings.
Are there alternative explanations?

Personal experiences with ghosts can be misleading due to the limitations of human senses. That’s why anecdotes can’t substitute for objective research. Alleged hauntings usually have plenty of non-ghostly explanations.

One example is that retail establishment in my neighborhood. I reviewed the security camera clips and gathered information about the store’s location and layout, and the exact equipment used in the recordings.

First, the “orbs”: Videos captured many small globes of light seemingly moving around the room. In reality, the orbs are tiny particles of dust[20] wafting close to the camera lens, made to “bloom” by the camera’s infrared lights. That they appear to float around the room is an optical illusion. Watch any orb video closely and you’ll see the orbs never go behind objects in the room. That’s exactly what you’d expect with dust particles close to the camera lens.

Next, the voices and bumps: The shop is in a busy corner mini-mall. Three walls abut sidewalks, loading zones and parking areas; an adjacent store shares the fourth. The security camera mics probably recorded sounds from outdoors, other rooms and the adjacent unit. The owner never checked for these possibilities.

Then, the falling objects: The video shows objects falling off the showroom wall. The shelf rests on adjustable brackets, one of which wasn’t fully seated in its slot. The weight of the shelf caused the bracket to settle into place with a visible jerk, sending some items tumbling off the shelf.

Finally, the flying book: I used a simple trick to recreate the event[21] at home – a hidden string taped inside a book’s cover, wrapped around the kitchen island, and tugged by my right hand out of camera range.
Experience the mystery of the flying book.
Now, I can’t prove there wasn’t a ghost in the original video. The point is to provide a more plausible explanation than “it must have been a ghost.”

One final consideration: Virtually all ghostly experiences involve impediments to making accurate perceptions and judgments – bad lighting[22], emotional arousal[23], sleep phenomena[24], social influences[25], culture[26], a misunderstanding of how recording devices work[27], and the prior beliefs[28] and personality traits[29] of those who claim to see ghosts. All of these hold the potential to induce unforgettable ghostly encounters. But all can be explained without ghosts being real.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to the Curious Kids team[30]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Read more

In the 2018 documentary “The Cleaners[1],” a young man in Manila, Philippines, explains his work as a content moderator: “We see the pictures on the screen. You then go through the pictures and delete those that don’t meet the guidelines. The daily quota of pictures is 25,000.” As he speaks, his mouse clicks, deleting offending images while allowing others to remain online.

The man in Manila is one of thousands of content moderators hired as contractors by social media platforms – 10,000 at Google alone[2]. Content moderation on an industrial scale like this is part of the everyday experience for users of social media. Occasionally a post someone makes is removed, or a post someone thinks is offensive is allowed to go viral.

Similarly, platforms add and remove features without input from the people who are most affected by those decisions. Whether you are outraged or unperturbed, most people don’t think much about the history of a system in which people in conference rooms in Silicon Valley and Manila determine your experiences online.

But why should a few companies – or a few billionaire owners – have the power to decide everything about online spaces that billions of people use? This unaccountable model of governance has led stakeholders of all stripes to criticize platforms’ decisions as arbitrary[3], corrupt[4] or irresponsible[5]. In the early, pre-web days of the social internet, decisions about the spaces people gathered in online were often made by members of the community. Our examination of the early history of online governance[6] suggests that social media platforms could return – at least in part – to models of community governance in order to address their crisis of legitimacy.

The documentary ‘The Cleaners’ shows some of the hidden costs of Big Tech’s customer service approach to content moderation.

Online governance – a history

In many early online spaces, governance was handled by community members, not by professionals. One early online space, LambdaMOO[7], invited users to build their own governance system, which devolved power from the hands of those who technically controlled the space – administrators known as “wizards” – to members of the community. This was accomplished via a formal petitioning process and a set of appointed mediators[8] who resolved conflicts between users.

Other spaces had more informal processes for incorporating community input. For example, on bulletin board systems, users voted with their wallets[9], removing critical financial support if they disagreed with the decisions made by the system’s administrators. Other spaces, like text-based Usenet newsgroups, gave users substantial power to shape their experiences. The newsgroups left obvious spam in place, but gave users tools to block it if they chose to. Usenet’s administrators argued that it was fairer to allow each user to make decisions that reflected their individual preferences[10] rather than taking a one-size-fits-all approach.

The graphical web expanded use of the internet from a few million users to hundreds of millions within a decade[11] from 1995 to 2005. During this rapid expansion, community governance was replaced with governance models inspired by customer service, which focused on scale and cost.

This switch from community governance to customer service made sense to the fast-growing companies that made up the late 1990s internet boom. Promising their investors that they could grow rapidly and make changes quickly, companies looked for approaches to the complex work of governing online spaces that centralized power and increased efficiency[12].

While this customer service model of governance allowed early user-generated content sites like Craigslist and GeoCities to grow rapidly[13], it set the stage for the crisis of legitimacy facing social media platforms today. Contemporary battles over social media are rooted in the sense that the people and processes governing online spaces are unaccountable to the communities that gather in them.

Paths to community control

Implementing community governance in today’s platforms could take a number of different forms, some of which are already being experimented with.

Advisory boards like Meta’s Oversight Board[14] are one way to involve outside stakeholders in platform governance, providing independent — albeit limited — review of platform decisions. X (formerly Twitter) is taking a more democratic approach with its Community Notes[15] initiative, which allows users to contextualize information on the platform by crowdsourcing notes and ratings.

Some may question whether community governance can be implemented successfully in platforms that serve billions of users. In response, we point to Wikipedia. It is entirely community-governed and has created an open encyclopedia that’s become the foremost information resource in many languages. Wikipedia is surprisingly resilient to vandalism and abuse, with robust procedures that ensure a resource used by billions remains accessible, accurate and reasonably civil.

On a smaller scale, total self-governance – echoing early online spaces – could be key for communities that serve specific subsets of users. For example, Archive of Our Own[16] was created after fan-fiction authors – people who write original stories using characters and worlds from published books, television shows and movies – found existing platforms unwelcoming. Many fan-fiction authors were kicked off social media platforms[17] due to overzealous copyright enforcement or concerns about sexual content.

Fed up with platforms that didn’t understand their work or their culture, a group of authors designed and built their own platform specifically to meet the needs of their community. AO3, as it is colloquially known, serves millions of people a month, includes tools specific to the needs of fan-fiction authors, and is governed by the same people it serves.

X, formerly Twitter, allows people to use Community Notes to append relevant information to posts that contain inaccuracies. Screen capture by The Conversation U.S.[18], CC BY-ND[19]

Hybrid models, like on Reddit, mix centralized and self-governance[20]. Reddit hosts a collection of interest-based communities called subreddits that have their own rules, norms and teams of moderators. Underlying a subreddit’s governance structure is a set of rules, processes and features that apply to everyone. Not every subreddit is a sterling example of a healthy online community, but more are than are not.

There are also technical approaches to community governance. One approach would enable users to choose the algorithms that curate their social media feeds. Imagine that instead of only being able to use Facebook’s algorithm, you could choose from a suite of algorithms provided by third parties – for example, from The New York Times or Fox News.
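The idea of user-chosen curation can be pictured as a pluggable ranking function. The sketch below is purely illustrative – none of these names correspond to a real platform API – but it shows how a platform could supply the posts while the scoring rule comes from a third party the user selects:

```python
# Hypothetical sketch of user-selected feed curation: the platform holds the
# posts, while the ranking function is chosen by the user. All names here are
# illustrative, not a real platform API.
from typing import Callable, Dict, List

Post = Dict[str, float]          # e.g. {"recency": ..., "engagement": ...}
Ranker = Callable[[Post], float]  # any third party could supply one of these

def chronological(post: Post) -> float:
    # A simple "newest first" rule.
    return post["recency"]

def engagement_driven(post: Post) -> float:
    # A rule that favors highly engaging posts.
    return post["engagement"]

def build_feed(posts: List[Post], ranker: Ranker) -> List[Post]:
    # The platform applies whichever scoring rule the user selected.
    return sorted(posts, key=ranker, reverse=True)

posts = [{"recency": 0.9, "engagement": 0.2},
         {"recency": 0.1, "engagement": 0.8}]

print(build_feed(posts, chronological)[0])      # the newest post comes first
print(build_feed(posts, engagement_driven)[0])  # the most engaging post comes first
```

The design choice being illustrated is separation of concerns: the platform keeps custody of the content, but the ordering logic becomes an interchangeable component rather than a single opaque default.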

More radically decentralized platforms like Mastodon devolve control to a network of servers that are similar in structure to email. This makes it easier to choose an experience that matches your preferences. You can choose which Mastodon server to use, and can switch easily – just as you can choose whether to use Gmail or Outlook for email – and can change your mind, all while maintaining access to the wider network.

Additionally, advancements in generative AI – which shows early promise in producing computer code[21] – could make it easier for people, even those without a technical background, to build custom online spaces when they find existing spaces unsuitable. This would relieve pressure on online spaces to be everything for everyone and support a sense of agency in the digital public sphere.

There are also more indirect ways to support community governance. Increasing transparency – for example, by providing access to data about the impact of platforms’ decisions – can help researchers, policymakers and the public hold online platforms accountable. Further, encouraging ethical professional norms among engineers and product designers can make online spaces more respectful of the communities they serve.

Going forward by going back

Between now and the end of 2024, national elections are scheduled in many countries, including Argentina, Australia, India, Indonesia, Mexico, South Africa, Taiwan, the U.K. and the U.S. This is all but certain to lead to conflicts over online spaces.

We believe it is time to consider not just how online spaces can be governed efficiently and in service to corporate bottom lines, but how they can be governed fairly and legitimately. Giving communities more control over the spaces they participate in is a proven way to do just that.


After a journey of seven years and nearly 4 billion miles, NASA’s OSIRIS-REx[1] spacecraft landed[2] gently in the Utah desert on the morning of Sept. 24, 2023, with a precious payload. The spacecraft[3] brought back a sample from the asteroid Bennu.

OSIRIS-REx collected a sample from the asteroid Bennu. NASA/Goddard Space Flight Center via AP[4]

Roughly half a pound of material collected from the 85 million-ton asteroid[5] (77.6 billion kg) will help scientists learn about the formation of the solar system[6], including whether asteroids like Bennu[7] include the chemical ingredients for life.

NASA’s mission was budgeted at US$800 million[8] and will end up costing around $1.16 billion[9] for just under 9 ounces of sample[10] (255 g). But is this the most expensive material known? Not even close.

I’m a professor of astronomy[11]. I use Moon and Mars rocks in my teaching and have a modest collection of meteorites. I marvel at the fact that I can hold in my hand something that is billions of years old from billions of miles away.

The cost of sample return

A handful of asteroid material works out to $132 million per ounce[12], or $4.7 million per gram. That’s about 70,000 times the price of gold[13], which has been in the range of $1,800 to $2,000 per ounce ($60 to $70 per gram) for the past few years.

The first extraterrestrial material returned to Earth came from the Apollo program. Between 1969 and 1972, six Apollo missions brought back 842 pounds (382 kg) of lunar samples[14].

The total price tag[15] for the Apollo program, adjusted for inflation, was $257 billion. These Moon rocks were a relative bargain at $19 million per ounce ($674,000 per gram), and of course Apollo had additional value in demonstrating technologies for human spaceflight.

NASA is planning to bring samples back from Mars in the early 2030s to see if any contain traces of ancient life. The Mars Sample Return[16] mission aims to return 30 sample tubes[17] with a total weight of a pound[18] (450 g). The Perseverance rover[19] has already cached 10 of these samples[20].

However, costs have grown[21] because the mission is complex, involving multiple robots and spacecraft. Bringing back the samples could run $11 billion, putting their cost at $690 million per ounce ($24 million per gram), five times the unit cost of the Bennu samples.
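These unit costs are simple division. As a rough cross-check – a sketch using only the totals and sample masses quoted above, so the results land within rounding of the article's per-gram figures – the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the sample-return unit costs quoted above.
GRAMS_PER_OUNCE = 28.35

def cost_per_gram(total_dollars: float, grams: float) -> float:
    return total_dollars / grams

bennu = cost_per_gram(1.16e9, 255)       # OSIRIS-REx: ~$4.5M per gram
apollo = cost_per_gram(257e9, 382_000)   # Apollo: ~$670,000 per gram
mars = cost_per_gram(11e9, 450)          # Mars Sample Return: ~$24M per gram

print(f"Bennu:  ${bennu / 1e6:.1f}M per gram")
print(f"Apollo: ${apollo / 1e3:.0f}K per gram")
print(f"Mars:   ${mars / 1e6:.1f}M per gram ({mars / bennu:.1f}x Bennu)")
```

Dividing the Mars figure by the Bennu figure gives a ratio a little over five, matching the "five times the unit cost" comparison.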

Some space rocks are free

Some space rocks cost nothing. Almost 50 tons of free samples from the solar system rain down on the Earth[22] every day. Most burn up in the atmosphere, but if they reach the ground they’re called meteorites[23], and most of those come from asteroids.

Meteorites can get costly[24] because it can be difficult to recognize and retrieve them. Rocks all look similar unless you’re a geology expert.

Most meteorites are stony, called chondrites[25], and they can be bought online for as little as $15 per ounce (50 cents per gram). Chondrites differ from normal rocks in containing round grains called chondrules[26] that formed as molten droplets in space at the birth of the solar system 4.5 billion years ago.

A chondrite from the Viñales meteorite, which originated from the asteroid belt between Mars and Jupiter. Ser Amantio di Nicolao/Wikimedia Commons[27], CC BY-SA[28]

Iron meteorites[29] are distinguished by a dark crust, caused by melting of the surface as they come through the atmosphere, and an internal pattern of long metallic crystals. They cost $50 per ounce ($1.77 per gram) or even higher. Pallasites[30] are stony-iron meteorites laced with the mineral olivine. When cut and polished, they have a translucent yellow-green color and can cost over $1,000 per ounce ($35 per gram).

An iron meteorite. Llez/Wikimedia Commons[31], CC BY-SA[32]

More than a few meteorites have reached us from the Moon and Mars. Close to 600 have been recognized as coming from the Moon[33], and the largest[34], weighing 4 pounds (1.8 kg), sold for a price that works out to be about $4,700 per ounce ($166 per gram).

About 175 meteorites are identified as having come from Mars[35]. Buying one[36] would cost about $11,000 per ounce ($388 per gram).

Researchers can figure out where meteorites come from[37] by using their landing trajectories to project their paths back to the asteroid belt or comparing their composition with different classes of asteroids. Experts can tell where Moon and Mars rocks come from by their geology and mineralogy.

The limitation of these “free” samples is that there is no way to know where on the Moon or Mars they came from, which limits their scientific usefulness. Also, they start to get contaminated as soon as they land on Earth, so it’s hard to tell if any microbes within them are extraterrestrial.

Expensive elements and minerals

Some elements and minerals are expensive because they’re scarce. Simple elements in the periodic table[38] have low prices. Per ounce, carbon costs one-third of a cent, iron costs 1 cent, aluminum costs 56 cents, and even mercury is less than a dollar (per 100 grams, carbon costs $2.40, iron costs less than a cent and aluminum costs 19 cents). Silver is $14 per ounce (50 cents per gram), and gold, $1,900 per ounce ($67 per gram).

Seven radioactive elements[39] are extremely rare in nature and so difficult to create in the lab that they eclipse the price of NASA’s Mars Sample Return. Polonium-209, the most expensive of these, costs $1.4 trillion per ounce ($49 billion per gram).

Gemstones can be expensive, too. High-quality emeralds[40] are 10 times the price of gold[41], and white diamonds[42] are 100 times the price of gold.

High-quality white diamonds can cost millions of dollars. AP Photo/Mary Altaffer[43]

Some diamonds have a boron impurity that gives them a vivid blue hue[44]. They’re found in only a handful of mines worldwide, and at $550 million per ounce[45] ($19 million per gram) they rival the cost of the upcoming Mars samples – an ounce is 142 carats, but very few gems are that large.

The most expensive synthetic material[46] is a tiny spherical “cage” of carbon with a nitrogen atom trapped inside. Because the trapped atom is extremely stable, these endohedral fullerenes[47] may be used to create extremely accurate atomic clocks. They can cost $4 billion per ounce ($141 million per gram).

Most expensive of all

Antimatter[48] occurs in nature, but it’s exceptionally rare because any time an antiparticle is created it quickly annihilates with a particle and produces radiation.

At CERN’s ‘antimatter factory,’ scientists create antimatter in very small quantities.

The particle accelerator at CERN[49] can produce 10 million antiprotons per minute. That sounds like a lot, but at that rate[50] it would take billions of years and cost a billion billion (10¹⁸) dollars to generate an ounce (3.5 × 10¹⁶ dollars per gram).
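The timescale follows from the antiproton's tiny mass. A rough estimate – assuming the antiproton's mass is the proton's, about 1.67 × 10⁻²⁷ kg, and using the production rate quoted above – looks like this:

```python
# Rough estimate of how long it would take to accumulate a macroscopic
# amount of antimatter at CERN's quoted production rate.
ANTIPROTON_MASS_KG = 1.67e-27    # roughly the proton mass
RATE_PER_MINUTE = 10_000_000     # antiprotons per minute, as quoted above
MINUTES_PER_YEAR = 60 * 24 * 365.25

def years_to_make(grams: float) -> float:
    # Number of antiprotons needed, divided by the production rate.
    n_antiprotons = (grams / 1000) / ANTIPROTON_MASS_KG
    return n_antiprotons / RATE_PER_MINUTE / MINUTES_PER_YEAR

print(f"1 gram:  {years_to_make(1):.1e} years")      # roughly 114 billion years
print(f"1 ounce: {years_to_make(28.35):.1e} years")  # trillions of years
```

Even a single gram would take on the order of a hundred billion years – several times the age of the universe – which is why antimatter tops this list by such a wide margin.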

Warp drives[51] as envisaged by “Star Trek,” which are powered by matter-antimatter annihilation, will have to wait.


Little Deming Lake doesn’t get much notice from visitors to Itasca State Park[1] in Minnesota. There’s better boating on nearby Lake Itasca, the headwaters of the Mississippi River. My colleagues and I need to maneuver hundreds of pounds of equipment down a hidden path made narrow by late-summer poison ivy to launch our rowboats.

But modest Deming Lake offers more than meets the eye for me, a geochemist[2] interested in how oxygen built up in the atmosphere 2.4 billion years ago. The absence of oxygen in the deep layers of Deming Lake is something this small body of water has in common with early Earth’s oceans.

On each of our several expeditions to the lake each year, we row our boats out into the deepest part of the lake – over 60 feet (18 meters), despite the lake’s surface area being only 13 acres. We drop an anchor and connect our boats in a flotilla, readying ourselves for the work ahead.

Researchers’ boats on Deming Lake. Elizabeth Swanner, CC BY-ND[3]

Deming Lake is meromictic[4], a term from Greek that means only partially mixing. In most lakes, at least once a year, the water at the top sinks while the water at the bottom rises because of wind and seasonal temperature changes that affect water’s density. But the deepest waters of Deming Lake never reach the surface[5]. This prevents oxygen in its top layer of water from ever mixing into its deep layer.

Less than 1% of lakes are meromictic, and most that are have dense, salty bottom waters. Deming Lake’s deep waters are not very salty, but of the salts in its bottom waters, iron is one of the most abundant[6]. This makes Deming Lake one of the rarest types of meromictic lakes[7].

Postdoc researcher Sajjad Akam collects a water sample for chemical analysis back in the lab. Elizabeth Swanner, CC BY-ND[8]

The lake surface is calm, and the still air is glorious on this cool, cloudless August morning. We lower a 2-foot-long water pump zip-tied to a cable attached to four sensors. The sensors measure the temperature, amount of oxygen, pH and amount of chlorophyll in the water at each layer we encounter. We pump water from the most intriguing layers up to the boat and fill a myriad of bottles and tubes, each destined for a different chemical or biological analysis.

My colleagues and I have homed in on Deming Lake to explore questions about how microbial life adapted to and changed the environmental conditions on early Earth. Our planet was inhabited only by microbes[9] for most of its history. The atmosphere and the oceans’ depths didn’t have much oxygen, but they did have a lot of iron, just like Deming Lake does. By investigating what Deming Lake’s microbes are doing, we can better understand how billions of years ago they helped to transform the Earth’s atmosphere and oceans into what they’re like now.

Layer by layer, into the lake

Two and a half billion years ago, ocean waters had enough iron to form today’s globally distributed rusty iron deposits called[10] banded iron formations[11] that supply iron for the modern global steel industry. Nowadays, oceans have only trace amounts of iron[12] but abundant oxygen. In most waters, iron and oxygen are antithetical. Rapid chemical and biological reactions between iron and oxygen[13] mean you can’t have much of one while the other is present.

The rise of oxygen in the early atmosphere and ocean was due to cyanobacteria[14]. These single-celled organisms emerged at least 2.5 billion years ago[15]. But it took roughly 2 billion years for the oxygen they produce via photosynthesis to build up to levels that allowed for the first animals[16] to appear on Earth.

Chlorophyll colors water from the lake slightly green. Elizabeth Swanner, CC BY-ND[17]

At Deming Lake, my colleagues and I pay special attention to the water layer where the chlorophyll readings jump. Chlorophyll is the pigment[18] that makes plants green. It harnesses sunlight energy to turn water and carbon dioxide into oxygen and sugars. Nearly 20 feet (6 meters) below Deming’s surface, the chlorophyll is in cyanobacteria and photosynthetic algae, not plants.

But the curious thing about this layer is that we don’t detect oxygen, despite the abundance of these oxygen-producing organisms. This is the depth where iron concentrations start to climb to the high levels present at the lake’s bottom.

This high-chlorophyll, high-iron and low-oxygen layer is of special interest to us because it might help us understand where cyanobacteria lived in the ancient ocean, how well they were growing and how much oxygen they produced.

The researchers record the data coming off their sensors in waterproof field notebooks. Elizabeth Swanner

We suspect the reason cyanobacteria gather at this depth in Deming Lake is that there is more iron there than at the top of the lake. Just like humans need iron for red blood cells[19], cyanobacteria need lots of iron to help catalyze the reactions of photosynthesis.

A likely reason we can’t measure any oxygen in this layer is that in addition to cyanobacteria, there are a lot of other bacteria here. After a good long life of a few days, the cyanobacteria die, and the other bacteria feed on their remains. These bacteria rapidly use up any oxygen produced by still photosynthesizing cyanobacteria the way a fire does as it burns through wood.

We know there are lots of bacteria here based on how cloudy the water is, and we see them when we inspect a drop of this water under a microscope. But we need another way to measure photosynthesis besides measuring oxygen levels.

Long-running lakeside laboratory

The other important function of photosynthesis is converting carbon dioxide into sugars, which eventually are used to make more cells. We need a way to track whether new sugars are being made, and if they are, whether it’s by photosynthetic cyanobacteria. So we fill glass bottles with samples of water from this lake layer and seal them tight with rubber stoppers.

We drive the 3 miles back to the Itasca Biological Station and Laboratories[20] where we will set up our experiments. The station opened in 1909 and is home base for us this week, providing comfy cabins, warm meals and this laboratory space.

In the lab, we inject our glass bottles with carbon dioxide that carries an isotopic tracer[21]. If cyanobacteria grow, their cells will incorporate this isotopic marker.

We had some help formulating our questions and experiments. University of Minnesota students attending summer field courses collected decades’ worth of data in Itasca State Park. A diligent university librarian digitized thousands of those students’ final papers[22].

My students and I pored over the papers concerning Deming Lake, many of which tried to determine whether the cyanobacteria in the chlorophyll-rich layer are doing photosynthesis. While most indicated yes, those students were measuring only oxygen and got ambiguous results. Our use of the isotopic tracer is trickier to implement but will give clearer results.

Graduate students Michelle Chamberlain and Zackry Stevenson about to sink the bottles for incubation in Deming Lake. Elizabeth Swanner, CC BY-ND[23]

That afternoon, we’re back on the lake. We toss an anchor; attached to its rope is a clear plastic bag holding the sealed bottles of lake water now amended with the isotopic tracer. They’ll spend the night in the chlorophyll-rich layer, and we’ll retrieve them after 24 hours. Any longer than that and the isotopic label might end up in the bacteria that eat the dying cyanobacteria instead of the cyanobacteria themselves. We tie off the rope to a floating buoy and head back to the station’s dining hall for our evening meal.

Iron, chlorophyll, oxygen

The next morning, as we wait for the bottles to finish their incubation, we collect water from the different layers of the lake and add some chemicals that kill the cells but preserve their bodies. We’ll look at these samples under the microscope to figure out how many cyanobacteria are in the water, and we’ll measure how much iron is inside the cyanobacteria.

That’s easier said than done, because we have to first separate all the “needles” (cyanobacteria) from the “hay” (other cells) and then clean any iron off the outside of the cyanobacteria. Back at Iowa State University, we’ll shoot the individual cells one by one into a flame that incinerates them, which liberates all the iron they contain so we can measure it.

Biogeochemist Katy Sparrow rows a research vessel to shore. Elizabeth Swanner, CC BY-ND[24]

Our scientific hunch, or hypothesis[25], is that the cyanobacteria that live in the chlorophyll- and iron-rich layer will contain more iron than cyanobacteria that live in the top lake layer. If they do, it will help us establish that greater access to iron is a motive for living in that deeper and dimmer layer.

These experiments won’t tell the whole story of why it took so long for Earth to build up oxygen, but they will help us to understand a piece of it – where oxygen might have been produced and why, and what happened to oxygen in that environment.

Deming Lake is quickly becoming its own attraction for those with a curiosity about what goes on beneath its tranquil surface – and what that might be able to tell us about how new forms of life took hold long ago on Earth.
