Throwback to early internet days could fix social media's crisis of legitimacy
In the 2018 documentary “The Cleaners[1],” a young man in Manila, Philippines, explains his work as a content moderator: “We see the pictures on the screen. You then go through the pictures and delete those that don’t meet the guidelines. The daily quota of pictures is 25,000.” As he speaks, his mouse clicks, deleting offending images while allowing others to remain online.
The man in Manila is one of thousands of content moderators hired as contractors by social media platforms – 10,000 at Google alone[2]. Content moderation on an industrial scale like this is part of the everyday experience for users of social media. Occasionally a post someone makes is removed, or a post someone thinks is offensive is allowed to go viral.
Similarly, platforms add and remove features without input from the people who are most affected by those decisions. Whether you are outraged or unperturbed by these decisions, you probably don’t think much about the history of a system in which people in conference rooms in Silicon Valley and Manila determine your experiences online.
But why should a few companies – or a few billionaire owners – have the power to decide everything about online spaces that billions of people use? This unaccountable model of governance has led stakeholders of all stripes to criticize platforms’ decisions as arbitrary[3], corrupt[4] or irresponsible[5]. In the early, pre-web days of the social internet, decisions about the spaces people gathered in online were often made by members of the community. Our examination of the early history of online governance[6] suggests that social media platforms could return – at least in part – to models of community governance in order to address their crisis of legitimacy.
Online governance – a history
In many early online spaces, governance was handled by community members, not by professionals. One early online space, LambdaMOO[7], invited users to build their own governance system, which devolved power from the hands of those who technically controlled the space – administrators known as “wizards” – to members of the community. This was accomplished via a formal petitioning process and a set of appointed mediators[8] who resolved conflicts between users.
Other spaces had more informal processes for incorporating community input. For example, on bulletin board systems, users voted with their wallets[9], removing critical financial support if they disagreed with the decisions made by the system’s administrators. Other spaces, like text-based Usenet newsgroups, gave users substantial power to shape their experiences. The newsgroups left obvious spam in place, but gave users tools to block it if they chose to. Usenet’s administrators argued that it was fairer to allow each user to make decisions that reflected their individual preferences[10] rather than taking a one-size-fits-all approach.
The graphical web expanded use of the internet from a few million users to hundreds of millions within a decade[11] from 1995 to 2005. During this rapid expansion, community governance was replaced with governance models inspired by customer service, which focused on scale and cost.
This switch from community governance to customer service made sense to the fast-growing companies that made up the late 1990s internet boom. Promising their investors that they could grow rapidly and make changes quickly, companies looked for approaches to the complex work of governing online spaces that centralized power and increased efficiency[12].
While this customer service model of governance allowed early user-generated content sites like Craigslist and GeoCities to grow rapidly[13], it set the stage for the crisis of legitimacy facing social media platforms today. Contemporary battles over social media are rooted in the sense that the people and processes governing online spaces are unaccountable to the communities that gather in them.
Paths to community control
Implementing community governance in today’s platforms could take a number of different forms, some of which are already being experimented with.
Advisory boards like Meta’s Oversight Board[14] are one way to involve outside stakeholders in platform governance, providing independent — albeit limited — review of platform decisions. X (formerly Twitter) is taking a more democratic approach with its Community Notes[15] initiative, which allows users to contextualize information on the platform by crowdsourcing notes and ratings.
Some may question whether community governance can be implemented successfully in platforms that serve billions of users. In response, we point to Wikipedia, which is entirely community-governed. Its contributors have created an open encyclopedia that’s become the foremost information resource in many languages. Wikipedia is surprisingly resilient to vandalism and abuse, with robust procedures that ensure a resource used by billions remains accessible, accurate and reasonably civil.
On a smaller scale, total self-governance – echoing early online spaces – could be key for communities that serve specific subsets of users. For example, Archive of Our Own[16] was created after fan-fiction authors – people who write original stories using characters and worlds from published books, television shows and movies – found existing platforms unwelcoming. Many fan-fiction authors had been kicked off social media platforms[17] due to overzealous copyright enforcement or concerns about sexual content.
Fed up with platforms that didn’t understand their work or their culture, a group of authors designed and built their own platform specifically to meet the needs of their community. AO3, as it is colloquially known, serves millions of people a month, includes tools specific to the needs of fan-fiction authors, and is governed by the same people it serves.
Hybrid models, like Reddit’s, mix centralized and self-governance[20]. Reddit hosts a collection of interest-based communities called subreddits that have their own rules, norms and teams of moderators. Underlying each subreddit’s governance structure is a set of platform-wide rules, processes and features that apply to everyone. Not every subreddit is a sterling example of a healthy online community, but more are than are not.
There are also technical approaches to community governance. One approach would enable users to choose the algorithms that curate their social media feeds. Imagine that instead of only being able to use Facebook’s algorithm, you could choose from a suite of algorithms provided by third parties – for example, from The New York Times or Fox News.
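To make that idea concrete, here is a minimal sketch in Python of what user-selectable feed algorithms could look like. None of this is a real platform’s API; the Post fields and the two ranker functions are hypothetical stand-ins for whatever a platform, or a third party such as a newsroom, might supply.

```python
# A minimal sketch (not any real platform's API) of user-selectable feed
# curation. Every name here is hypothetical and exists only to illustrate
# the idea of swappable ranking algorithms.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    likes: int
    age_hours: float

# A "ranking algorithm" is just a function that assigns a post a score.
RankingAlgorithm = Callable[[Post], float]

def engagement_ranker(post: Post) -> float:
    """Favor heavily liked, recent posts (a stand-in for a platform default)."""
    return post.likes / (1.0 + post.age_hours)

def chronological_ranker(post: Post) -> float:
    """Newest first, ignoring popularity (something a third party could supply)."""
    return -post.age_hours

def build_feed(posts: List[Post], ranker: RankingAlgorithm) -> List[Post]:
    """Curate the same pool of posts with whichever algorithm the user picked."""
    return sorted(posts, key=ranker, reverse=True)

posts = [
    Post("alice", "Old but popular", likes=500, age_hours=48.0),
    Post("bob", "Brand new", likes=2, age_hours=1.0),
]
print([p.author for p in build_feed(posts, engagement_ranker)])     # ['alice', 'bob']
print([p.author for p in build_feed(posts, chronological_ranker)])  # ['bob', 'alice']
```

The same pool of posts yields different feeds depending on which ranker the user selects, which is the essence of the proposal.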
More radically decentralized platforms like Mastodon devolve control to a network of servers, similar in structure to email. You can choose which Mastodon server to use and switch easily – just as you can choose whether to use Gmail or Outlook for email and change your mind later – all while maintaining access to the wider network. This makes it easier to find an experience that matches your preferences.
Additionally, advancements in generative AI – which shows early promise in producing computer code[21] – could make it easier for people, even those without a technical background, to build custom online spaces when they find existing spaces unsuitable. This would relieve pressure on online spaces to be everything for everyone and support a sense of agency in the digital public sphere.
There are also more indirect ways to support community governance. Increasing transparency – for example, by providing access to data about the impact of platforms’ decisions – can help researchers, policymakers and the public hold online platforms accountable. Further, encouraging ethical professional norms among engineers and product designers can make online spaces more respectful of the communities they serve.
Going forward by going back
Between now and the end of 2024, national elections are scheduled in many countries, including Argentina, Australia, India, Indonesia, Mexico, South Africa, Taiwan, the U.K. and the U.S. This is all but certain to lead to conflicts over online spaces.
We believe it is time to consider not just how online spaces can be governed efficiently and in service to corporate bottom lines, but how they can be governed fairly and legitimately. Giving communities more control over the spaces they participate in is a proven way to do just that.
Space rocks and asteroid dust are pricey, but these aren't the most expensive materials used in science
After a journey of seven years and nearly 4 billion miles, NASA’s OSIRIS-REx[1] spacecraft landed[2] gently in the Utah desert on the morning of Sept. 24, 2023, with a precious payload. The spacecraft[3] brought back a sample from the asteroid Bennu.
Roughly half a pound of material collected from the 85 million-ton asteroid[5] (77.6 billion kg) will help scientists learn about the formation of the solar system[6], including whether asteroids like Bennu[7] include the chemical ingredients for life.
NASA’s mission was budgeted at US$800 million[8] and will end up costing around $1.16 billion[9] for just under 9 ounces of sample[10] (255 g). But is this the most expensive material known? Not even close.
I’m a professor of astronomy[11]. I use Moon and Mars rocks in my teaching and have a modest collection of meteorites. I marvel at the fact that I can hold in my hand something that is billions of years old from billions of miles away.
The cost of sample return
A handful of asteroid material works out to $132 million per ounce[12], or $4.7 million per gram. That’s about 70,000 times the price of gold[13], which has been in the range of $1,800 to $2,000 per ounce ($60 to $70 per gram) for the past few years.
The first extraterrestrial material returned to Earth came from the Apollo program. Between 1969 and 1972, six Apollo missions brought back 842 pounds (382 kg) of lunar samples[14].
The total price tag[15] for the Apollo program, adjusted for inflation, was $257 billion. These Moon rocks were a relative bargain at $19 million per ounce ($674 thousand per gram), and of course Apollo had additional value in demonstrating technologies for human spaceflight.
NASA is planning to bring samples back from Mars in the early 2030s to see if any contain traces of ancient life. The Mars Sample Return[16] mission aims to return 30 sample tubes[17] with a total weight of a pound[18] (450 g). The Perseverance rover[19] has already cached 10 of these samples[20].
However, costs have grown[21] because the mission is complex, involving multiple robots and spacecraft. Bringing back the samples could run $11 billion, putting their cost at $690 million per ounce ($24 million per gram), five times the unit cost of the Bennu samples.
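The unit costs in this section come from simple division. A quick check in Python, using the figures quoted above and avoirdupois ounces of roughly 28.35 grams, reproduces them to within rounding:

```python
# Reproducing the per-gram and per-ounce figures quoted above from the
# headline numbers (article's figures; Mars Sample Return costs are estimates).
GRAMS_PER_OUNCE = 28.35

missions = {
    "OSIRIS-REx (Bennu)": (1.16e9, 255),      # total cost in dollars, grams returned
    "Apollo":             (257e9, 382_000),
    "Mars Sample Return": (11e9, 450),
}

for name, (cost, grams) in missions.items():
    per_gram = cost / grams
    per_ounce = per_gram * GRAMS_PER_OUNCE
    print(f"{name}: ${per_gram:,.0f}/gram, ${per_ounce/1e6:,.0f} million/ounce")

# OSIRIS-REx (Bennu):  ~$4.5 million/gram,  ~$129 million/ounce
# Apollo:              ~$673,000/gram,      ~$19 million/ounce
# Mars Sample Return:  ~$24 million/gram,   ~$693 million/ounce
```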
Some space rocks are free
Some space rocks cost nothing. Almost 50 tons of free samples from the solar system rain down on the Earth[22] every day. Most burn up in the atmosphere, but if they reach the ground they’re called meteorites[23], and most of those come from asteroids.
Meteorites can get costly[24] because recognizing and retrieving them is difficult. Rocks all look similar unless you’re a geology expert.
Most meteorites are stony, called chondrites[25], and they can be bought online for as little as $15 per ounce (50 cents per gram). Chondrites differ from normal rocks in containing round grains called chondrules[26] that formed as molten droplets in space at the birth of the solar system 4.5 billion years ago.
Iron meteorites[29] are distinguished by a dark crust, caused by melting of the surface as they come through the atmosphere, and an internal pattern of long metallic crystals. They cost $50 per ounce ($1.77 per gram) or more. Pallasites[30] are stony-iron meteorites laced with the mineral olivine. When cut and polished, they have a translucent yellow-green color and can cost over $1,000 per ounce ($35 per gram).
More than a few meteorites have reached us from the Moon and Mars. Close to 600 have been recognized as coming from the Moon[33], and the largest[34], weighing 4 pounds (1.8 kg), sold for a price that works out to be about $4,700 per ounce ($166 per gram).
About 175 meteorites are identified as having come from Mars[35]. Buying one[36] would cost about $11,000 per ounce ($388 per gram).
Researchers can figure out where meteorites come from[37] by using their landing trajectories to project their paths back to the asteroid belt or comparing their composition with different classes of asteroids. Experts can tell where Moon and Mars rocks come from by their geology and mineralogy.
The limitation of these “free” samples is that there is no way to know where on the Moon or Mars they came from, which limits their scientific usefulness. Also, they start to get contaminated as soon as they land on Earth, so it’s hard to tell if any microbes within them are extraterrestrial.
Expensive elements and minerals
Some elements and minerals are expensive because they’re scarce. Simple elements in the periodic table[38] have low prices. Per ounce, carbon costs one-third of a cent, iron costs 1 cent, aluminum costs 56 cents, and even mercury is less than a dollar (per 100 grams, carbon costs $2.40, iron costs less than a cent and aluminum costs 19 cents). Silver is $14 per ounce (50 cents per gram), and gold, $1,900 per ounce ($67 per gram).
Seven radioactive elements[39] are extremely rare in nature and so difficult to create in the lab that they eclipse the price of NASA’s Mars Sample Return. Polonium-209, the most expensive of these, costs $1.4 trillion per ounce ($49 billion per gram).
Gemstones can be expensive, too. High-quality emeralds[40] are 10 times the price of gold[41], and white diamonds[42] are 100 times the price of gold.
Some diamonds have a boron impurity that gives them a vivid blue hue[44]. They’re found in only a handful of mines worldwide, and at $550 million per ounce[45] ($19 million per gram) they rival the cost of the upcoming Mars samples – an ounce is 142 carats, but very few gems are that large.
The most expensive synthetic material[46] is a tiny spherical “cage” of carbon with a nitrogen atom trapped inside. Because the trapped atom is extremely stable, these endohedral fullerenes[47] may be used to create extremely accurate atomic clocks. They can cost $4 billion per ounce ($141 million per gram).
Most expensive of all
Antimatter[48] occurs in nature, but it’s exceptionally rare because any time an antiparticle is created it quickly annihilates with a particle and produces radiation.
The particle accelerator at CERN[49] can produce 10 million antiprotons per minute. That sounds like a lot, but at that rate[50] it would take billions of years and cost a billion billion (10^18) dollars to generate an ounce ($3.5 × 10^16 per gram).
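For a sense of scale, here is that arithmetic as a rough Python sketch, using the production rate and per-gram cost cited above and assuming a standard antiproton mass of about 1.67 × 10^-24 grams; it is an order-of-magnitude estimate only.

```python
# Rough version of the antimatter arithmetic above: how long it would take,
# and what it would cost, to accumulate a gram of antiprotons at the quoted
# production rate. Order-of-magnitude only.
ANTIPROTON_MASS_G = 1.67e-24      # mass of one antiproton in grams
RATE_PER_MINUTE = 1e7             # ~10 million antiprotons per minute
COST_PER_GRAM = 3.5e16            # dollars, the figure cited above
GRAMS_PER_OUNCE = 28.35

antiprotons_per_gram = 1 / ANTIPROTON_MASS_G       # ~6 x 10^23 particles
minutes = antiprotons_per_gram / RATE_PER_MINUTE
years = minutes / (60 * 24 * 365)

print(f"{years:.1e} years per gram")               # ~1 x 10^11 (over 100 billion) years
print(f"${COST_PER_GRAM * GRAMS_PER_OUNCE:.1e} per ounce")  # ~10^18, a billion billion dollars
```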
Warp drives[51] as envisaged by “Star Trek,” which are powered by matter-antimatter annihilation, will have to wait.
A layered lake is a little like Earth’s early oceans − and lets researchers explore how oxygen built up in our atmosphere billions of years ago
Little Deming Lake doesn’t get much notice from visitors to Itasca State Park[1] in Minnesota. There’s better boating on nearby Lake Itasca, the headwaters of the Mississippi River. My colleagues and I need to maneuver hundreds of pounds of equipment down a hidden path made narrow by late-summer poison ivy to launch our rowboats.
But modest Deming Lake offers more than meets the eye for me, a geochemist[2] interested in how oxygen built up in the atmosphere 2.4 billion years ago. The absence of oxygen in the deep layers of Deming Lake is something this small body of water has in common with early Earth’s oceans.
On each of our several expeditions here every year, we row our boats out over the deepest part of the lake – more than 60 feet (18 meters) deep, despite the lake’s surface area of only 13 acres. We drop an anchor and connect our boats in a flotilla, readying ourselves for the work ahead.
Deming Lake is meromictic[4], a term from Greek that means only partially mixing. In most lakes, at least once a year, the water at the top sinks while the water at the bottom rises because of wind and seasonal temperature changes that affect water’s density. But the deepest waters of Deming Lake never reach the surface[5]. This prevents oxygen in its top layer of water from ever mixing into its deep layer.
Less than 1% of lakes are meromictic, and most that are have dense, salty bottom waters. Deming Lake’s deep waters are not very salty, but of the salts in its bottom waters, iron is one of the most abundant[6]. This makes Deming Lake one of the rarest types of meromictic lakes[7].
The lake surface is calm, and the still air is glorious on this cool, cloudless August morning. We lower a 2-foot-long water pump zip-tied to a cable attached to four sensors. The sensors measure the temperature, amount of oxygen, pH and amount of chlorophyll in the water at each layer we encounter. We pump water from the most intriguing layers up to the boat and fill a myriad of bottles and tubes, each destined for a different chemical or biological analysis.
My colleagues and I have homed in on Deming Lake to explore questions about how microbial life adapted to and changed the environmental conditions on early Earth. Our planet was inhabited only by microbes[9] for most of its history. The atmosphere and the oceans’ depths didn’t have much oxygen, but they did have a lot of iron, just like Deming Lake does. By investigating what Deming Lake’s microbes are doing, we can better understand how billions of years ago they helped to transform the Earth’s atmosphere and oceans into what they’re like now.
Layer by layer, into the lake
Two and a half billion years ago, ocean waters had enough iron to form today’s globally distributed rusty iron deposits called[10] banded iron formations[11] that supply iron for the modern global steel industry. Nowadays, oceans have only trace amounts of iron[12] but abundant oxygen. In most waters, iron and oxygen are antithetical. Rapid chemical and biological reactions between iron and oxygen[13] mean you can’t have much of one while the other is present.
The rise of oxygen in the early atmosphere and ocean was due to cyanobacteria[14]. These single-celled organisms emerged at least 2.5 billion years ago[15]. But it took roughly 2 billion years for the oxygen they produce via photosynthesis to build up to levels that allowed for the first animals[16] to appear on Earth.
At Deming Lake, my colleagues and I pay special attention to the water layer where the chlorophyll readings jump. Chlorophyll is the pigment[18] that makes plants green. It harnesses sunlight energy to turn water and carbon dioxide into oxygen and sugars. Nearly 20 feet (6 meters) below Deming’s surface, the chlorophyll is in cyanobacteria and photosynthetic algae, not plants.
But the curious thing about this layer is that we don’t detect oxygen, despite the abundance of these oxygen-producing organisms. This is the depth where iron concentrations start to climb to the high levels present at the lake’s bottom.
This high-chlorophyll, high-iron and low-oxygen layer is of special interest to us because it might help us understand where cyanobacteria lived in the ancient ocean, how well they were growing and how much oxygen they produced.
We suspect the reason cyanobacteria gather at this depth in Deming Lake is that there is more iron there than at the top of the lake. Just like humans need iron for red blood cells[19], cyanobacteria need lots of iron to help catalyze the reactions of photosynthesis.
A likely reason we can’t measure any oxygen in this layer is that in addition to cyanobacteria, there are a lot of other bacteria here. After a good long life of a few days, the cyanobacteria die, and the other bacteria feed on their remains. These bacteria use up any oxygen produced by the still-photosynthesizing cyanobacteria as rapidly as a fire consumes it while burning through wood.
We know there are lots of bacteria here based on how cloudy the water is, and we see them when we inspect a drop of this water under a microscope. But we need another way to measure photosynthesis besides measuring oxygen levels.
Long-running lakeside laboratory
The other important function of photosynthesis is converting carbon dioxide into sugars, which eventually are used to make more cells. We need a way to track whether new sugars are being made, and if they are, whether it’s by photosynthetic cyanobacteria. So we fill glass bottles with samples of water from this lake layer and seal them tight with rubber stoppers.
We drive the 3 miles back to the Itasca Biological Station and Laboratories[20] where we will set up our experiments. The station opened in 1909 and is home base for us this week, providing comfy cabins, warm meals and this laboratory space.
In the lab, we inject our glass bottles with carbon dioxide that carries an isotopic tracer[21]. If cyanobacteria grow, their cells will incorporate this isotopic marker.
We had a little help formulating our questions and experiments. University of Minnesota students attending summer field courses have collected decades’ worth of data in Itasca State Park. A diligent university librarian digitized thousands of those students’ final papers[22].
My students and I pored over the papers concerning Deming Lake, many of which tried to determine whether the cyanobacteria in the chlorophyll-rich layer are doing photosynthesis. While most indicated yes, those students were measuring only oxygen and got ambiguous results. Our use of the isotopic tracer is trickier to implement but will give clearer results.
That afternoon, we’re back on the lake. We toss an anchor; attached to its rope is a clear plastic bag holding the sealed bottles of lake water now amended with the isotopic tracer. They’ll spend the night in the chlorophyll-rich layer, and we’ll retrieve them after 24 hours. Any longer than that and the isotopic label might end up in the bacteria that eat the dying cyanobacteria instead of the cyanobacteria themselves. We tie off the rope to a floating buoy and head back to the station’s dining hall for our evening meal.
Iron, chlorophyll, oxygen
The next morning, as we wait for the bottles to finish their incubation, we collect water from the different layers of the lake and add some chemicals that kill the cells but preserve their bodies. We’ll look at these samples under the microscope to figure out how many cyanobacteria are in the water, and we’ll measure how much iron is inside the cyanobacteria.
That’s easier said than done, because we have to first separate all the “needles” (cyanobacteria) from the “hay” (other cells) and then clean any iron off the outside of the cyanobacteria. Back at Iowa State University, we’ll shoot the individual cells one by one into a flame that incinerates them, which liberates all the iron they contain so we can measure it.
Our scientific hunch, or hypothesis[25], is that the cyanobacteria that live in the chlorophyll- and iron-rich layer will contain more iron than cyanobacteria that live in the top lake layer. If they do, it will help us establish that greater access to iron is a motive for living in that deeper and dimmer layer.
These experiments won’t tell the whole story of why it took so long for Earth to build up oxygen, but they will help us to understand a piece of it – where oxygen might have been produced and why, and what happened to oxygen in that environment.
Deming Lake is quickly becoming its own attraction for those with a curiosity about what goes on beneath its tranquil surface – and what that might be able to tell us about how new forms of life took hold long ago on Earth.
Why Google, Bing and other search engines' embrace of generative AI threatens $68 billion SEO industry
Google, Microsoft and others boast that generative artificial intelligence tools like ChatGPT will make searching the internet[1] better than ever for users[2]. For example, rather than having to wade through a sea of URLs, users will be able to just get an answer combed from the entire internet.
There are also some concerns about the rise of AI-fueled search engines[3], such as a lack of transparency about where information comes from, the potential for “hallucinated” answers and copyright issues.
But I believe another consequence may be the destruction of the US$68 billion search engine optimization[4] industry that companies like Google helped create.
For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization[5], or SEO, to “convince” search engines to rank their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.
As an associate professor of information and operations management[6], I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.
How online search works
Someone seeking information online opens her browser, goes to a search engine and types in the relevant keywords. The search engine displays the results, and the user browses through the links displayed in the result listings until she finds the relevant information.
To attract users’ attention, online content providers use various search engine marketing strategies, such as search engine optimization[7], paid placements[8] and banner displays[9].
For instance, a news website might hire a consultant to help it highlight key words in headlines and in metadata so that Google and Bing elevate its content when a user searches for the latest information on a flood or political crisis.
How generative AI changes the search process
But this all depends on search engines luring tens of millions of users to their websites. And so to earn users’ loyalty and web traffic, search engines must continuously work on their algorithms to improve the quality of their search results.
That’s why, even if it could hurt a part of their revenue stream, search engines have been quick to experiment with generative AI[11] to improve search results. And this could fundamentally change the online search ecosystem.
All the biggest search engines have already adopted or are experimenting with this approach. Examples include Google’s Bard[12], Microsoft’s Bing AI[13], Baidu’s ERNIE[14] and DuckDuckGo’s DuckAssist[15].
Rather than getting a list of links, both organic and paid, based on whatever keywords or questions a user types in, generative AI will simply give you a text result[16] in the form of an answer. Say you’re planning a trip to Destin, Florida, and you type the prompt “Create a three-day itinerary for a visitor” there. Instead of a bunch of links to Yelp and blog postings that require lots of clicking and reading, Bing AI will return a detailed three-day itinerary.
Over time, as the quality of AI-generated answers improves, users will have less incentive to browse through search result listings. They can save time and effort by reading the AI-generated response to their query.
In other words, AI-generated answers would allow users to bypass all those paid links and the costly efforts websites make to improve their SEO scores, rendering those efforts useless.
When users start ignoring the sponsored and editorial result listings, this will have an adverse impact on the revenues of SEO consultants and search engine marketing[17] consultants and, ultimately, on the bottom line of the search engines themselves.
The financial impact
This financial impact cannot be ignored.
For example, the SEO industry generated $68.1 billion globally in 2022[18]. It had been expected to reach $129.6 billion by 2030, but these projections were made before the emergence of generative AI put the industry at risk of obsolescence.
As for search engines[19], monetizing online search services is a major source of their revenue[20]. They get a cut of the money that websites spend on improving their online visibility through paid placements, ads, affiliate marketing and the like, collectively known as search engine marketing. For example, approximately 58% of Google’s 2022 revenues[21] – or almost $162.5 billion – came from Google Ads, which provides some of these services.
Search engines run by massive companies with many revenue streams, like Google and Microsoft, will likely find ways to offset the losses by coming up with strategies to make money off generative AI answers. But the SEO marketers and consultants who depend on search engines – mostly small- and medium-sized companies[22] – will no longer be needed as they are today, and so the industry is unlikely to survive much longer.
A not-too-distant future
But don’t expect the SEO industry to fade away immediately. Generative AI search engines are still in their infancy and must address certain challenges before they’ll dominate search.
For one thing, most of these initiatives are still experimental[23] and often available only to certain users. And for another, generative AI has been notorious for providing incorrect[24], plagiarized[25] or simply made-up answers[26].
That means generative AI search is unlikely, at the moment, to gain the trust or loyalty of many users.
Given these challenges, it is not surprising that generative AI has yet to transform online search[27]. However, given the resources available to researchers working on generative AI models, it is safe to assume that eventually these models will become better at their task, leading to the death of the SEO industry.