Chandrayaan-3's measurements of sulfur open the doors for lunar science and exploration
In an exciting milestone for lunar scientists around the globe, India’s Chandrayaan-3 lander touched down 375 miles (600 km) from the south pole of the Moon on Aug. 23, 2023.
In just under 14 Earth days, Chandrayaan-3 provided scientists with valuable new data and further inspiration to explore the Moon. And the Indian Space Research Organization has shared these initial results with the world.
While the data from Chandrayaan-3’s rover, named Pragyan, or “wisdom” in Sanskrit, showed the lunar soil contains expected elements such as iron, titanium, aluminum and calcium, it also held a surprise – sulfur.
Planetary scientists like me have known that sulfur exists in lunar rocks and soils, but only at a very low concentration. These new measurements imply there may be a higher sulfur concentration than anticipated.
Pragyan has two instruments that analyze the elemental composition of the soil – an alpha particle X-ray spectrometer and a laser-induced breakdown spectrometer, or LIBS for short. Both of these instruments measured sulfur in the soil near the landing site.
Sulfur in soils near the Moon’s poles might help astronauts live off the land one day, making these measurements an example of science that enables exploration.
Geology of the Moon
There are two main rock types on the Moon’s surface – dark volcanic rock and the brighter highland rock. The brightness difference between these two materials forms the familiar “man in the moon” face or “rabbit picking rice” image to the naked eye.
Scientists measuring lunar rock and soil compositions in labs on Earth have found that materials from the dark volcanic plains tend to have more sulfur than the brighter highlands material.
Sulfur mainly comes from volcanic activity. Rocks deep in the Moon contain sulfur, and when these rocks melt, the sulfur becomes part of the magma. When the melted rock nears the surface, most of the sulfur in the magma becomes a gas that is released along with water vapor and carbon dioxide.
Some of the sulfur does stay in the magma and is retained within the rock after it cools. This process explains why sulfur is primarily associated with the Moon’s dark volcanic rocks.
Chandrayaan-3’s measurements of sulfur in soils are the first made on the Moon itself. The exact amount of sulfur cannot be determined until the data calibration is completed.
The uncalibrated data collected by the LIBS instrument on Pragyan suggests that the Moon’s highland soils near the poles might have a higher sulfur concentration than highland soils from the equator, and possibly even higher than the dark volcanic soils.
These initial results give planetary scientists like me who study the Moon new insights into how it works as a geologic system. But we’ll still have to wait and see if the fully calibrated data from the Chandrayaan-3 team confirms an elevated sulfur concentration.
Atmospheric sulfur formation
The measurement of sulfur is interesting to scientists for at least two reasons. First, these findings indicate that the highland soils at the lunar poles could have fundamentally different compositions, compared with highland soils at the lunar equatorial regions. This compositional difference likely comes from the different environmental conditions between the two regions – the poles get less direct sunlight.
Second, these results suggest that there’s somehow more sulfur in the polar regions. Sulfur concentrated here could have formed from the exceedingly thin lunar atmosphere.
The polar regions of the Moon receive less direct sunlight and, as a result, experience extremely low temperatures compared with the rest of the Moon. If the surface temperature falls below -73 degrees C (-99 degrees F), then sulfur from the lunar atmosphere could collect on the surface in solid form – like frost on a window.
Sulfur at the poles could also have originated from ancient volcanic eruptions occurring on the lunar surface, or from meteorites containing sulfur that struck the surface and vaporized on impact.
Lunar sulfur as a resource
For long-lasting space missions, many agencies have thought about building some sort of base on the Moon. Astronauts and robots could travel from the south pole base to collect, process, store and use naturally occurring materials like sulfur on the Moon – a concept called in-situ resource utilization.
In-situ resource utilization means fewer trips back to Earth to get supplies and more time and energy spent exploring. Using sulfur as a resource, astronauts could build solar cells and batteries that use sulfur, mix up sulfur-based fertilizer and make sulfur-based concrete for construction.
Sulfur-based concrete actually has several benefits compared with the concrete normally used in building projects on Earth.
For one, sulfur-based concrete hardens and becomes strong within hours rather than weeks, and it’s more resistant to wear. It also doesn’t require water in the mixture, so astronauts could save their valuable water for drinking, crafting breathable oxygen and making rocket fuel.
While seven missions are currently operating on or around the Moon, the lunar south pole region hasn’t been studied from the surface before, so Pragyan’s new measurements will help planetary scientists understand the geologic history of the Moon. It’ll also allow lunar scientists like me to ask new questions about how the Moon formed and evolved.
For now, the scientists at the Indian Space Research Organization are busy processing and calibrating the data. On the lunar surface, Chandrayaan-3 is hibernating through the two-week-long lunar night, when temperatures will drop to -120 degrees C (-184 degrees F). The night will last until September 22.
There’s no guarantee that Chandrayaan-3’s lander, called Vikram, or its rover, Pragyan, will survive the extremely low temperatures, but should Pragyan awaken, scientists can expect more valuable measurements.
Spyware can infect your phone or computer via the ads you see online – report
Each day, you leave digital traces of what you did, where you went, who you communicated with, what you bought, what you’re thinking of buying, and much more. This mass of data serves as a library of clues for personalized ads, which are sent to you by a sophisticated network – an automated marketplace of advertisers, publishers and ad brokers that operates at lightning speed.
The ad networks are designed to shield your identity, but companies and governments are able to combine that information with other data, particularly phone location, to identify you and track your movements and online activity. More invasive yet is spyware – malicious software that a government agent, private investigator or criminal installs on someone’s phone or computer without their knowledge or consent. Spyware lets the user see the contents of the target’s device, including calls, texts, email and voicemail. Some forms of spyware can take control of a phone, including turning on its microphone and camera.
Now, according to an investigative report by the Israeli newspaper Haaretz, an Israeli technology company called Insanet has developed the means of delivering spyware via online ad networks, turning some targeted ads into Trojan horses. According to the report, there’s no defense against the spyware, and the Israeli government has given Insanet approval to sell the technology.
Sneaking in unseen
Insanet’s spyware, Sherlock, is not the first spyware that can be installed on a phone without the need to trick the phone’s owner into clicking on a malicious link or downloading a malicious file. NSO Group’s iPhone-hacking Pegasus, for instance, is one of the most controversial spyware tools to emerge in the past five years.
Pegasus relies on vulnerabilities in Apple’s iOS, the iPhone operating system, to infiltrate a phone undetected. Apple issued a security update for the latest vulnerability on Sept. 7, 2023.
What sets Insanet’s Sherlock apart from Pegasus is its exploitation of ad networks rather than vulnerabilities in phones. A Sherlock user creates an ad campaign that narrowly focuses on the target’s demographic and location, and places a spyware-laden ad with an ad exchange. Once the ad is served to a web page that the target views, the spyware is secretly installed on the target’s phone or computer.
Although it’s too early to determine the full extent of Sherlock’s capabilities and limitations, the Haaretz report found that it can infect Windows-based computers and Android phones as well as iPhones.
Spyware vs. malware
Ad networks have been used to deliver malicious software for years, a practice dubbed malvertising. In most cases, the malware is aimed at computers rather than phones, is indiscriminate, and is designed to lock a user’s data as part of a ransomware attack or steal passwords to access online accounts or organizational networks. The ad networks constantly scan for malvertising and rapidly block it when detected.
Spyware, on the other hand, tends to be aimed at phones, is targeted at specific people or narrow categories of people, and is designed to clandestinely obtain sensitive information and monitor someone’s activities. Once spyware infiltrates your system, it can record keystrokes, take screenshots and use various tracking mechanisms before transmitting your stolen data to the spyware’s creator.
While its actual capabilities are still under investigation, the new Sherlock spyware is at least capable of infiltration, monitoring, data capture and data transmission, according to the Haaretz report.
Who’s using spyware
From 2011 to 2023, at least 74 governments engaged in contracts with commercial companies to acquire spyware or digital forensics technology. National governments might deploy spyware for surveillance and gathering intelligence as well as combating crime and terrorism. Law enforcement agencies might similarly use spyware as part of investigative efforts, especially in cases involving cybercrime, organized crime or national security threats.
Companies might use spyware to monitor employees’ computer activities, ostensibly to protect intellectual property, prevent data breaches or ensure compliance with company policies. Private investigators might use spyware to gather information and evidence for clients on legal or personal matters. Hackers and organized crime figures might use spyware to steal information to use in fraud or extortion schemes.
On top of the revelation that Israeli cybersecurity firms have developed a defense-proof technology that appropriates online advertising for civilian surveillance, a key concern is that Insanet’s advanced spyware was legally authorized by the Israeli government for sale to a broader audience. This potentially puts virtually everyone at risk.
The silver lining is that Sherlock appears to be expensive to use. According to an internal company document cited in the Haaretz report, a single Sherlock infection costs a client a hefty US$6.4 million.
NASA's Mars rovers could inspire a more ethical future for AI
Since ChatGPT’s release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. Tech pundits have issued warnings of killer robots bent on human extinction, while the World Economic Forum predicted that machines will take away jobs.
The tech sector is slashing its workforce even as it invests in AI-enhanced productivity tools. Writers and actors in Hollywood are on strike to protect their jobs and their likenesses. And scholars continue to show how these systems heighten existing biases or create meaningless jobs – amid myriad other problems.
There is a better way to bring artificial intelligence into workplaces. I know, because I’ve seen it, as a sociologist who works with NASA’s robotic spacecraft teams.
The scientists and engineers I study are busy exploring the surface of Mars with the help of AI-equipped rovers. But their job is no science fiction fantasy. It’s an example of the power of weaving machine and human intelligence together, in service of a common goal.
Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.
The replacement myth in AI
Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be replaced by automated machines.
Amid the existential threat is the promise of business boons like greater efficiency, improved profit margins and more leisure time.
Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by cutting out low-status workers and increasing the salary cost for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to work more for their employers, not less.
Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, self-driving cars must be programmed to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.
However, mixed autonomy is often seen as a step along the way to replacement. And it can lead to systems where humans merely feed, curate or teach AI tools. This saddles humans with “ghost work” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.
Replacement raises red flags for AI ethics. Work like tagging content to train AI or scrubbing Facebook posts typically features traumatic tasks and a poorly paid workforce spread across the Global South. And legions of autonomous vehicle designers are obsessed with “the trolley problem” – determining when or whether it is ethical to run over pedestrians.
But my research with robotic spacecraft teams at NASA shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.
Extending rather than replacing
Strong human-robot teams work best when they extend and augment human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, working toward a shared goal.
Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. Minesweeping, search-and-rescue, spacewalk and deep-sea robots are all real-world examples.
Teamwork also means leveraging the combined strengths of both robotic and human senses or intelligences. After all, there are many capabilities that robots have that humans do not – and vice versa.
For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers with camera filters to “see” wavelengths of light that humans can’t see in the infrared, returning pictures in brilliant false colors.
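The basic trick behind a false-color picture can be sketched in a few lines of code. This is only an illustration of the general technique, not the rover teams’ actual image pipeline, and all the pixel values below are invented: each wavelength band is shifted one slot toward the visible, so a hypothetical infrared reading drives the red channel and features bright only in infrared appear vividly red to a human viewer.

```python
def normalize(band):
    # Scale a 2D grid of raw sensor readings into the 0-1 range.
    flat = [v for row in band for v in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo
    return [[(v - lo) / span if span else 0.0 for v in row] for row in band]

def false_color_composite(infrared, red, green):
    # Shift each band one slot toward the visible: infrared -> red channel,
    # red -> green channel, green -> blue channel.
    ir, r, g = normalize(infrared), normalize(red), normalize(green)
    rows, cols = len(ir), len(ir[0])
    return [[(ir[i][j], r[i][j], g[i][j]) for j in range(cols)]
            for i in range(rows)]

# Toy 1x2 "scene": the left pixel is bright only in infrared,
# so it comes out pure red in the composite.
image = false_color_composite(
    infrared=[[0.9, 0.1]], red=[[0.2, 0.2]], green=[[0.1, 0.1]])
```

Real pipelines add calibration and contrast stretching, but the channel-shifting idea is the same.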
Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to uncover new truths about Mars.
Respectful data
Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work without their consent, commercial datasets are rife with bias, and ChatGPT “hallucinates” answers to questions.
The real-world consequences of this data use in AI range from lawsuits to racial profiling.
Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information to generate driveable pathways or suggest cool new images.
By focusing on the world around them instead of our social worlds, these robotic systems avoid the questions around surveillance, bias and exploitation that plague today’s AI.
The ethics of care
Robots can unite the groups that work with them by eliciting human emotions when integrated seamlessly. For example, seasoned soldiers mourn broken drones on the battlefield, and families give names and personalities to their Roombas.
I saw NASA engineers break down in anxious tears when the rovers Spirit and Opportunity were threatened by Martian dust storms.
Unlike anthropomorphism – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility.
When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.
A better AI is possible
In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them.
Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms to fuel creativity[59] and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.
Of course, rejecting replacement does not eliminate all ethical concerns[60] with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.
The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch “Star Wars” if the ‘droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.
Read more https://theconversation.com/nasas-mars-rovers-could-inspire-a-more-ethical-future-for-ai-211162
Depression recovery can be hard to measure − new research on deep brain stimulation shows how objective biomarkers could help make treatment more precise
It can be challenging to create a treatment plan for depression. This is especially true for patients who aren’t responding to conventional treatments and are undergoing experimental therapies such as deep brain stimulation. For most medical conditions, doctors can directly measure the part of the body that is being treated, such as blood pressure for cardiovascular disease. These measurable changes serve as an objective biomarker of recovery that provides valuable information about how to care for these patients.
On the other hand, for depression and other psychiatric disorders, clinicians rely on subjective and nonspecific surveys that ask patients about their symptoms. When a patient tells their doctor they are experiencing negative emotions, is that because they are relapsing in their depression or because they had a bad day like everyone does sometimes? Are they anxious because their depression symptoms have lessened enough that they are experiencing new feelings, or do they have some other medical problem independent of their depression? Each reason may indicate a different course of action, such as altering a medication, addressing an issue in psychotherapy or increasing the intensity of brain stimulation treatment.
We are neuroengineers. In our study, newly published in Nature, we identified potential biomarkers for deep brain stimulation that could one day help guide clinicians and patients when making treatment decisions for those using this approach to alleviate treatment-resistant depression.
Biomarker for depression
Clinical depression does not respond to available therapies in a significant number of patients. Researchers have been working to find alternative options for those with treatment-resistant depression, and many decades of experiments have identified specific brain networks with abnormal electrical activity in those with depression.
This notion of depression as abnormal brain activity rather than a chemical imbalance led to the development of deep brain stimulation as a depression treatment: a surgically implanted, pacemaker-like device that delivers electrical impulses to certain areas of the brain. Studies testing this technique have found that it can decrease depression severity over time in most patients.
Our research team wanted to find specific changes in brain activity that could serve as a biomarker that objectively measures how well deep brain stimulation is helping patients with depression. So we monitored the brain activity of 10 patients receiving deep brain stimulation for severe treatment-resistant depression over six months.
At the end of six months, 90% of the patients responded to the therapy – defined by a reduction of symptoms by at least half – and 70% were in remission, meaning they no longer met the criteria for clinical depression.
To identify a potential biomarker, we developed an algorithm that looked for patterns in brain activity changes as patients recovered. The algorithm was based on data from six of the original 10 patients who had usable data from the experiment. We found that there are coordinated changes in different frequencies present in the electrical activity within the area of the brain being stimulated. Using these patterns, the algorithm was able to predict whether someone was in a stable recovery with 90% accuracy each week.
Interestingly, we observed some parts of this pattern moved in the opposite direction later in stimulation therapy compared with the patterns at the start of therapy. This finding provides evidence that the long-term recovery is due to the brain adapting to the stimulation in a process called plasticity rather than as a direct effect of the stimulation itself.
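The published study does not include runnable code, but the underlying idea – summarize each week’s brain recording as power in a few frequency bands, then flag the week as “stable recovery” when that pattern matches a template learned from recovered weeks – can be sketched as follows. The band edges, template and threshold here are all illustrative assumptions, not the study’s actual features.

```python
import math

def band_powers(signal, fs, bands):
    # Naive DFT: total spectral power inside each named frequency band.
    n = len(signal)
    powers = {}
    for name, (lo, hi) in bands.items():
        total = 0.0
        for k in range(1, n // 2):
            freq = k * fs / n
            if lo <= freq < hi:
                re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                         for t in range(n))
                im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                         for t in range(n))
                total += (re * re + im * im) / n
        powers[name] = total
    return powers

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def looks_recovered(weekly_features, recovery_template, threshold=0.9):
    # Flag a week as "stable recovery" when its spectral pattern lines up
    # closely with a template learned from weeks of confirmed recovery.
    return cosine_similarity(weekly_features, recovery_template) >= threshold

# Toy example: a pure 10 Hz oscillation sampled at 100 Hz concentrates
# its power in the 8-12 Hz band.
wave = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
powers = band_powers(wave, fs=100,
                     bands={"8-12 Hz": (8, 12), "13-30 Hz": (13, 30)})
```

A clinical system would learn the template and threshold from labeled patient data rather than fix them by hand, but the shape of the computation is the same.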
We also saw other potential biomarkers worth investigating further.
For example, abnormalities in brain imaging taken before implanting the electrodes in specific parts of the brain correlated with how sick each patient was. This could provide clues about what’s causing depression in some people, or help develop imaging methods to determine who might be a good candidate for deep brain stimulation.
For another example, we found that the facial expressions of patients changed as their brains changed over the course of their treatment. While physicians often report this anecdotally, quantifying these changes may provide a way to develop objective markers of recovery that incorporate a patient’s behavior with their brain signals.
Because the results of our study are based on a small sample of patients, it’s important to further investigate how broadly they can be applied to other patients and newer deep brain stimulation devices.
Improving decision-making for depression
Clinical depression is a debilitating condition that causes significant personal and societal suffering. It is one of the largest contributors to the overall disease burden of many countries. Despite the many approved treatments available, nearly 30% of the 8.9 million U.S. adults taking medications for clinical depression continue to have symptoms.
Deep brain stimulation is one of the alternative therapies for treatment-resistant depression that researchers are investigating. Studies have shown that deep brain stimulation can offer effective and long-term relief for some patients.
Although deep brain stimulation is an approved treatment for other conditions like Parkinson’s disease, it remains an experimental therapy for treatment-resistant depression. While the results from small experimental studies have been positive, they have not been successfully replicated in large-scale, randomized clinical trials necessary for approval from the U.S. Food and Drug Administration.
Finding an objective biomarker that measures recovery in depression has the potential to improve treatment decisions. For example, one patient in our study had a relapse after several months of remission. Were a biomarker available at the time, the clinical team would have had warning that the patient was relapsing weeks before standard symptom surveys showed that anything was wrong. Such a tool could help clinicians intervene before a relapse becomes an emergency.
Your unique body odor could identify who you are and provide insights into your health – all from the touch of a hand
From the aroma of fresh-cut grass to the smell of a loved one, you encounter scents in every part of your life. Not only are you constantly surrounded by odor, you’re also producing it. And it is so distinctive that it can be used to tell you apart from everyone around you.
Your scent is a complex product influenced by many factors, including your genetics. Researchers believe that a particular group of genes, the major histocompatibility complex, plays a large role in scent production. These genes are involved in the body’s immune response and are believed to influence body odor by encoding the production of specific proteins and chemicals.
But your scent isn’t fixed once your body produces it. As sweat, oils and other secretions make it to the surface of your skin, microbes break down and transform these compounds, changing and adding to the odors that make up your scent. This scent medley emanates from your body and settles into the environments around you. And it can be used to track, locate or identify a particular person, as well as distinguish between healthy and unhealthy people.
We are researchers who specialize in studying human scent through the detection and characterization of gaseous chemicals called volatile organic compounds. These gases can relay an abundance of information for both forensic researchers and health care providers.
Science of body odor
When you are near another person, you can feel their body heat without touching them. You may even be able to smell them without getting very close. The natural warmth of the human body creates a temperature differential with the air around it. You warm up the air nearest to you, while air that’s farther away remains cool, creating warm currents of air that surround your body.
Researchers believe that this plume of air helps disperse your scent by pushing the millions of skin cells you shed over the course of a day off your body and into the environment. These skin cells act as boats or rafts carrying glandular secretions and your resident microbes – a combination of ingredients that emit your scent – and depositing them in your surroundings.
Your scent is composed of the volatile organic compounds present in the gases emitted from your skin. These gases are the combination of sweat, oils and trace elements exuded from the glands in your skin. The primary components of your odor depend on internal factors such as your race, ethnicity, biological sex and other traits. Secondary components waver based on factors like stress, diet and illness. And tertiary components from external sources like perfumes and soaps build on top of your distinguishable odor profile.
Identity of scent
With so many factors influencing the scent of any given person, your body odor can be used as an identifying feature. Scent detection canines searching for a suspect can look past all the other odors they encounter to follow a scent trail left behind by the person they are pursuing. This practice relies on the assumption that each person’s scent is distinct enough that it can be distinguished from other people’s.
Researchers have been studying the discriminating potential of human scent for over three decades. A 1988 experiment demonstrated that a dog could distinguish identical twins living apart and exposed to different environmental conditions by their scent alone. This is a feat that could not be accomplished using DNA evidence, as identical twins share the same genetic code.
The field of human scent analysis has expanded over the years to further study the composition of human scent and how it can be used as a form of forensic evidence. Researchers have seen differences in human odor composition that can be classified based on sex, gender, race and ethnicity. Our research team’s 2017 study of 105 participants found that specific combinations of 15 volatile organic compounds collected from people’s hands could distinguish between race and ethnicity with an accuracy of 72% for whites, 82% for East Asians and 67% for Hispanics. Based on a combination of 13 compounds, participants could be distinguished as male or female with an overall 80% accuracy.
Researchers are also producing models to predict the characteristics of a person based on their scent. From a sample pool of 30 women and 30 men, our team built a machine learning model that could predict a person’s biological sex with 96% accuracy based on hand odor.
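The team’s actual model isn’t described in runnable detail here, but the core idea behind this kind of classifier, assigning a hand-odor sample to whichever group’s average chemical profile it most resembles, can be sketched with a simple nearest-centroid rule. All the abundance values and the two-compound feature space below are invented for illustration.

```python
import math

def centroid(profiles):
    # Average VOC abundance vector for one group of training samples.
    dims = len(profiles[0])
    return [sum(p[i] for p in profiles) / len(profiles) for i in range(dims)]

def nearest_centroid_predict(sample, groups):
    # Assign the sample to whichever group's centroid is closest
    # in plain Euclidean distance over the VOC feature space.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    centroids = {label: centroid(profiles)
                 for label, profiles in groups.items()}
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Invented abundances for two VOCs measured on a few training hands.
training = {
    "female": [[0.8, 0.2], [0.9, 0.3], [0.7, 0.1]],
    "male": [[0.2, 0.9], [0.3, 0.8], [0.1, 0.7]],
}
label = nearest_centroid_predict([0.75, 0.25], training)
```

A published model would use many more compounds, cross-validation and a more flexible learner, but the principle of mapping a chemical profile to a predicted trait is the same.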
Scent of health
Odor research continues to provide insights into illnesses. Well-known examples of using scent in medical assessments include seizure and diabetic alert canines. These dogs can give their handlers time to prepare for an impending seizure or notify them when they need to adjust their blood glucose levels.
While these canines often work with a single patient known to have a condition that requires close monitoring, medical detection dogs can also indicate whether someone is ill. For example, researchers have shown that dogs can be trained to detect cancer in people. Canines have also been trained to detect COVID-19 infections at a 90% accuracy rate.
Similarly, our research team found that a laboratory analysis of hand odor samples could discriminate between people who are COVID-19 positive or negative with 75% accuracy.
Forensics of scent
Human scent offers a noninvasive method to collect samples. While direct contact with a surface like touching a doorknob or wearing a sweater provides a clear route for your scent to transfer to that surface, simply standing still will also transfer your odor into the surrounding area.
Although human scent has the potential to be a critical form of forensic evidence, it is still a developing field. Imagine a law enforcement officer collecting a scent sample from a crime scene in hopes that it may match with a suspect.
Further research into human scent analysis can help fill the gaps in our understanding of the individuality of human scent and how to apply this information in forensic and biomedical labs.