Why Google, Bing and other search engines' embrace of generative AI threatens $68 billion SEO industry
Google, Microsoft and others boast that generative artificial intelligence tools like ChatGPT will make searching the internet[1] better than ever for users[2]. For example, rather than having to wade through a sea of URLs, users will be able to just get an answer combed from the entire internet.
There are also some concerns with the rise of AI-fueled search engines[3], such as the opacity over where information comes from, the potential for “hallucinated” answers and copyright issues.
But another consequence, I believe, is that it may destroy the US$68 billion search engine optimization[4] industry that companies like Google helped create.
For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization[5], or SEO, to “convince” search engines to rank their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.
As an associate professor of information and operations management[6], I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.
How online search works
Someone seeking information online opens her browser, goes to a search engine and types in the relevant keywords. The search engine displays the results, and the user browses through the links displayed in the result listings until she finds the relevant information.
To attract the user’s attention, online content providers use various search engine marketing strategies, such as search engine optimization[7], paid placements[8] and banner displays[9].
For instance, a news website might hire a consultant to help it highlight key words in headlines and in metadata so that Google and Bing elevate its content when a user searches for the latest information on a flood or political crisis.
How generative AI changes the search process
But this all depends on search engines luring tens of millions of users to their websites. And so to earn users’ loyalty and web traffic, search engines must continuously work on their algorithms to improve the quality of their search results.
That’s why, even if it could hurt a part of their revenue stream, search engines have been quick to experiment with generative AI[11] to improve search results. And this could fundamentally change the online search ecosystem.
All the biggest search engines have already adopted or are experimenting with this approach. Examples include Google’s Bard[12], Microsoft’s Bing AI[13], Baidu’s ERNIE[14] and DuckDuckGo’s DuckAssist[15].
Rather than getting a list of links, both organic and paid, based on whatever keywords or questions a user types in, generative AI will instead simply give you a text result[16] in the form of an answer. Say you’re planning a trip to Destin, Florida. Instead of a bunch of links to Yelp and blog postings that require lots of clicking and reading, typing the prompt “Create a three-day itinerary for a visitor to Destin” into Bing AI will produce a detailed three-day itinerary.
Over time, as the quality of AI-generated answers improves, users will have less incentive to browse through search result listings. They can save time and effort by reading the AI-generated response to their query.
In other words, it would allow you to bypass all those paid links and costly efforts by websites to improve their SEO scores, rendering them useless.
When users start ignoring the sponsored and editorial result listings, this will have an adverse impact on the revenues of SEO consultants and search engine marketers[17] and, ultimately, the bottom line of search engines themselves.
The financial impact
This financial impact cannot be ignored.
For example, the SEO industry generated $68.1 billion globally in 2022[18]. It had been expected to reach $129.6 billion by 2030, but these projections were made before the emergence of generative AI put the industry at risk of obsolescence.
As for search engines[19], monetizing online search services is a major source of their revenue[20]. They get a cut of the money that websites spend on improving their online visibility through paid placements, ads, affiliate marketing and the like, collectively known as search engine marketing. For example, approximately 58% of Google’s 2022 revenues[21] – or almost $162.5 billion – came from Google Ads, which provides some of these services.
Search engines run by massive companies with many revenue streams, like Google and Microsoft, will likely find ways to offset the losses by coming up with strategies to make money off generative AI answers. But the SEO marketers and consultants who depend on search engines – mostly small- and medium-sized companies[22] – will no longer be needed as they are today, and so the industry is unlikely to survive much longer.
A not-too-distant future
But don’t expect the SEO industry to fade away immediately. Generative AI search engines are still in their infancy and must address certain challenges before they’ll dominate search.
For one thing, most of these initiatives are still experimental[23] and often available only to certain users. And for another, generative AI has been notorious for providing incorrect[24], plagiarized[25] or simply made-up answers[26].
That means it’s unlikely at the moment to gain the trust or loyalty of many users.
Given these challenges, it is not surprising that generative AI has yet to transform online search[27]. However, given the resources available to researchers working on generative AI models, it is safe to assume that eventually these models will become better at their task, leading to the death of the SEO industry.
Quantum dots − a new Nobel laureate describes the development of these nanoparticles from basic research to industry application
The Nobel Prize in chemistry for 2023 goes to three scientists “for the discovery and synthesis of quantum dots[1].” The Conversation Weekly podcast[2] caught up with one of this trio, physical chemist Louis Brus[3], who did foundational work figuring out that the properties of these nanoparticles depend on their size. Brus’ phone was off when the Nobel reps called to inform him of the good news, but now plenty of people have gotten through with congratulations and advice. Below are edited excerpts from the podcast.
When you were working at Bell Labs in the 1980s and discovered quantum dots, it was something of an accident. You were studying solutions of semiconductor particles. And when you aimed lasers at these solutions, called colloids, you noticed that the colors they emitted were not constant.
On the first day we made the colloid, sometimes the spectrum was different. Second and third day, it was normal. There certainly was a surprise when I first saw this change in the spectrum. And so, I began to try to figure out what the heck was going on with that.
I noticed that the property of the particle itself began to change at a very small size.
What you’d found was a quantum dot: a type of nanoparticle that absorbs light and emits it at another wavelength. Crucially, the color these particles emit depends on their size. How do you even see a quantum dot crystal, since one is just a few hundred-thousandths of the width of a human hair?
Well, you can’t see them with an optical microscope because they’re smaller than the wavelength of light. There are ways to see them, though, using other types of specialist microscopes, such as an electron microscope. And a common way of demonstrating them is to line up a row of brightly colored glass flasks, each holding a solution of different-sized quantum dots.
One of your fellow laureates, Alexei Ekimov[6], was a Russian scientist, and he’d actually observed quantum dots in colored glass, but you weren’t aware of his findings at the time?
Yes, that’s right. The Cold War was going on at that time, and he published in the Russian literature, in Russian. And he wasn’t allowed to travel to the West to talk about his work.
I asked around among all the physicists, was there any work on small particles? I was trying to make a model of the quantum size effects. And they told me no, nobody’s really working on this. Nobody had seen his articles, basically.
I was part of the U.S. chemistry community, doing synthetic chemistry in the laboratory. He was in the glass industry in the Soviet Union, working on industrial technology.
When I eventually found his articles in the technological literature, I wrote a letter to the Soviet Union, with my papers, just to say hello to Ekimov and his colleagues. When the letter arrived, the KGB came to talk to the Russian scientists, trying to figure out why they had any contact with anybody in the West. But in fact they had never talked to me or anyone else in the West before my letter arrived in the mail.
Have you met him since?
Yes, they were able to come out of the Soviet Union during glasnost; this would be the late 1980s. There’s Ekimov, and then there is his theoretical collaborator Sasha Efros[7], who now works at the U.S. Naval Research Lab[8]. I met them as soon as they came to the U.S.
Listen to the interview with Louis Brus on The Conversation Weekly podcast. Each week, academic experts tell us about the fascinating discoveries they’re making to understand the world and the big questions they’re still trying to answer.
One of the issues with quantum dots, when you first observed them, was how to actually produce them and keep them stable. Then, in the 1990s, your fellow laureate, Moungi Bawendi[9], figured this out. What do you think is the most striking thing that you’ve seen quantum dots used in so far?
Usually when a new material is invented, it takes a long time to figure out what it’s really good for. Research scientists, they have ideas, you might use it for this, you might use it for that. But then, if you talk to people in the actual industry, who deal every day with manufacturing problems, these ideas are often not very good.
But the knowledge that we gained, the scientific principles, could be used to help to design new devices.
As far as first applications, people began to try to use them in biological imaging. Biochemists attach quantum dots to other molecules to help map cells and organs. They’ve even been used to detect tumors, and to help guide surgeons during operations.
And as scientists kept working to synthesize quantum dots, the quality of the particles kept improving. They were emitting pure colors, rather than distributions of light – like maybe red with a little bit of green, or maybe red with some pink. When you got a better particle, it would be just pure red, for instance.
So then people made the connection to the display industry – computer displays and television displays. In this application, you want to convert electricity into three colors: red, green and blue. You can make up any kind of image, starting with just those three colors in different proportions.
It takes a lot of courage. You have to invest a lot of money to develop the technology, and maybe at the end of it, it’s not good enough, and it will not replace what you already have. And there’s a lot of credit due to the Samsung Corporation in South Korea. Hundreds of billions of dollars were invested in the technology of these particles to get them to the point where they could begin to manufacture displays and flat-panel TVs using quantum dots.
Your work is an example of the importance of basic research, of being curious, trying to solve mysteries without a particular endpoint or industrial application in sight. What message would you have for a young chemist starting out today working on such basic research?
The world is a huge place, and you could do basic research in a huge number of different areas. You want to pick a problem where, if you are spectacularly successful and you actually discover something really interesting, it might have some application in the world.
For better or for worse, you have to make a choice in the beginning, and it takes some intuition.
A good way to do it is you pick a subject that you know is important to technology, but there’s no understanding of the science at the present time. It’s a complete black box. Nobody understands the basic principles. That kind of problem, you can begin to take it apart and look to see what the basic steps are.
What changes for you now that you’ve won the Nobel Prize?
Well, this Nobel Prize, for better or for worse, has a special meaning in people’s minds all over the world. Yesterday when the mailman came I happened to be at the front door and he recognized me because my face was in the local newspaper. And he said, “I’ve never shaken the hand of a Nobel laureate before.”
For better or for worse, this is where I am right now, in a special category whether I like it or not. I still have my office in the university, but I don’t have a research group. I’m trying to leave that to the younger people. So this recognition probably means less for my research than it would if I was 40 years old.
I have received congratulations by email from a number of people who won the prize in past years. Their main recommendation is you must learn to say no. People will ask you to do all kinds of crazy things, and your time will be entirely taken up with these honorific university visits and giving little speeches. In order to have a real life and to be productive, you have to say no to all of these extraneous invitations.
And they also told me to have fun in Sweden! It’s an extremely elaborate schedule of events for that week in December when this award ceremony is. Extremely fancy. American culture, physics culture is different – if you win a prize from the American Physical Society, it’s a very low-key event. You just show up in an auditorium. It’s not even necessary to wear a suit.
So I will take my family, my grandchildren to Sweden and we’ll try to enjoy this as a great vacation.
Rancid food smells and tastes gross − AI tools may help scientists prevent that spoilage
Have you ever bitten into a nut or a piece of chocolate, expecting a smooth, rich taste, only to encounter an unexpected and unpleasant chalky or sour flavor? That taste is rancidity in action, and it affects pretty much every product in your pantry. Now artificial intelligence[1] can help scientists tackle this issue more precisely and efficiently.
We’re a group of chemists who study ways to extend the life of food products, including those that go rancid. We recently published a study[2] describing the advantages of AI tools to help keep oil and fat samples fresh for longer. Because oils and fats are common components in many food types, including chips, chocolate and nuts, the outcomes of the study could be broadly applied and even affect other areas, including cosmetics and pharmaceuticals.
Rancidity and antioxidants
Food goes rancid[3] when it’s exposed to the air for a while – a process called oxidation. In fact, many common ingredients, but especially lipids[4], which are fats and oils, react with oxygen. The presence of heat or UV light can accelerate the process.
Oxidation leads to the formation of smaller molecules such as ketones[5], aldehydes[6] and fatty acids[7] that give rancid foods a characteristic rank, strong and metallic scent. Repeatedly consuming rancid foods can threaten your health[8].
Fortunately, both nature and the food industry have an excellent shield against rancidity – antioxidants.
Antioxidants[9] include a broad range of natural molecules, like vitamin C, and synthetic molecules capable of protecting your food from oxidation.
While there are a few ways antioxidants work[10], overall they can neutralize many of the processes that cause rancidity and preserve the flavors and nutritional value of your food for longer. Most often, customers don’t even know they are consuming added antioxidants, as food manufacturers typically add them in small amounts during preparation.
But you can’t just sprinkle some vitamin C on your food and expect to see a preservative effect. Researchers have to carefully choose a specific set of antioxidants[11] and precisely calculate the amount of each.
Combining antioxidants does not always strengthen their effect. In fact, there are cases in which using the wrong antioxidants, or mixing them with the wrong ratios, can decrease their protective effect – that’s called antagonism[12]. Finding out which combinations work for which types of food requires many experiments, which are time-consuming, require specialized personnel and increase the food’s overall cost.
Exploring all possible combinations would require an enormous amount of time and resources, so researchers are stuck with a few mixtures that provide only some level of protection against rancidity. Here’s where AI comes into play.
A use for AI
You’ve probably seen AI tools like ChatGPT[13] in the news or played around with them yourself. These types of systems can take in big sets of data[14] and identify patterns, then generate an output that could be useful to the user.
As chemists, we wanted to teach an AI tool how to look for new combinations of antioxidants. For this, we selected a type of AI capable of working with textual representations[15], which are written codes describing the chemical structure of each antioxidant. First, we fed our AI a list of about a million chemical reactions and taught the program some simple chemistry concepts, like how to identify important features of molecules.
Once the machine could recognize general chemical patterns, like how certain molecules react with each other, we fine-tuned it by teaching it some more advanced chemistry. For this step, our team used a database of almost 1,100 mixtures previously described in the research literature.
At this point, the AI could predict the effect of combining any set of two or three antioxidants in under a second. Its prediction aligned with the effect described in the literature 90% of the time.
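The workflow described above can be sketched at toy scale: encode each two-antioxidant mixture as a feature vector, then fit a simple model to labeled outcomes. The snippet below is a hypothetical illustration only, not the authors' actual model; the antioxidant "fingerprints" and synergy labels are invented (the real study used textual chemical representations and a database of roughly 1,100 published mixtures).

```python
# Toy sketch: logistic regression on hand-made antioxidant "fingerprints".
# All fingerprints and labels below are invented for illustration.
import math

FINGERPRINTS = {
    "vitamin_C": [1.0, 0.2, 0.0],
    "vitamin_E": [0.1, 1.0, 0.3],
    "BHT":       [0.0, 0.4, 1.0],
    "rosemary":  [0.6, 0.6, 0.2],
}

def featurize(a, b):
    """Concatenate the two fingerprints in a fixed (sorted) order so the
    feature vector does not depend on how the pair is written."""
    fa, fb = sorted([FINGERPRINTS[a], FINGERPRINTS[b]])
    return fa + fb

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, labels, epochs=500, lr=0.5):
    """Plain-Python logistic regression trained by stochastic gradient descent."""
    dim = len(featurize(*pairs[0]))
    w, bias = [0.0] * dim, 0.0
    for _ in range(epochs):
        for pair, y in zip(pairs, labels):
            x = featurize(*pair)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            bias -= lr * err
    return w, bias

def predict(w, bias, a, b):
    """Predicted probability that the mixture (a, b) is synergistic."""
    x = featurize(a, b)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias)

# Made-up training outcomes: 1 = synergistic mixture, 0 = antagonistic.
pairs  = [("vitamin_C", "vitamin_E"), ("vitamin_C", "BHT"),
          ("vitamin_E", "rosemary"),  ("rosemary", "BHT")]
labels = [1, 0, 1, 0]

w, bias = train(pairs, labels)
for pair in pairs:
    print(pair, round(predict(w, bias, *pair), 2))
```

Once trained, such a model scores any candidate pair in well under a second, which is what makes screening the huge space of possible combinations tractable compared with running a lab experiment for each one.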
But these predictions didn’t quite align with the experiments our team performed in the lab. In fact, we found that our AI was able to correctly predict only a few of the oxidation experiments we performed with real lard, which shows the complexities of transferring results from a computer to the lab.
Refining and enhancing
Luckily, AI models aren’t static tools with predefined yes and no pathways. They’re dynamic learners[16], so our research team can continue feeding the model new data until it sharpens its predictive capabilities and can accurately predict the effect of each antioxidant combination. The more data the model gets, the more accurate it becomes, much like how humans grow through learning.
We found that adding about 200 examples from the lab enabled the AI to learn enough chemistry to predict the outcomes of the experiments performed by our team, with only a slight difference between the predicted and the real value.
A model like ours may be able to assist scientists developing better ways to preserve food by coming up with the best antioxidant combinations for the specific foods they’re working with, kind of like having a very clever assistant.
Our team is now exploring more effective ways to train the AI model and looking for ways to further improve its predictive capabilities.
New technique uses near-miss particle physics to peer into quantum world − two physicists explain how they are measuring wobbling tau particles
One way physicists seek clues to unravel the mysteries of the universe is by smashing matter together and inspecting the debris. But these types of destructive experiments, while incredibly informative, have limits.
We are two scientists who study nuclear[1] and particle physics[2] using CERN’s Large Hadron Collider near Geneva, Switzerland. Working with an international group of nuclear and particle physicists, our team realized that hidden in the data from previous studies was a remarkable and innovative experiment.
In a new paper published in Physical Review Letters, we and our colleagues describe a new method for measuring how fast a particle called the tau wobbles[3].
Our novel approach looks at events in which incoming particles in the accelerator whiz past each other rather than smash together in head-on collisions. Surprisingly, this approach enables far more accurate measurements of the tau particle’s wobble than previous techniques. This is the first time in nearly 20 years scientists have measured this wobble, known as the tau magnetic moment[4], and it may help illuminate tantalizing cracks emerging in the known laws of physics[5].
Why measure a wobble?
Electrons, the building blocks of atoms, have two heavier cousins called the muon and the tau[7]. Taus are the heaviest in this family of three and the most mysterious, as they exist only for minuscule amounts of time.
Interestingly, when you place an electron, muon or tau inside a magnetic field, these particles wobble in a manner similar to how a spinning top wobbles on a table. This wobble is called a particle’s magnetic moment. It is possible to predict how fast these particles should wobble using the Standard Model of particle physics[8] – scientists’ best theory of how particles interact.
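This wobble can be written down compactly. In standard textbook notation (not drawn from the paper itself), a particle of charge $q$, mass $m$ and g-factor $g$ in a magnetic field $B$ precesses at the angular frequency

```latex
\omega = g \, \frac{qB}{2m},
```

and the quantum effects described next show up as a small shift in $g$, conventionally quoted as the anomalous magnetic moment $a = (g-2)/2$.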
Since the 1940s, physicists have been interested in measuring magnetic moments to reveal intriguing effects in the quantum world[9]. According to quantum physics, clouds of particles and antiparticles are constantly popping in and out of existence[10]. These fleeting fluctuations slightly alter how fast electrons, muons and taus wobble inside a magnetic field. By measuring this wobble very precisely, physicists can peer into this cloud to uncover possible hints of undiscovered particles.
Testing electrons, muons and taus
In 1948, theoretical physicist Julian Schwinger first calculated how the quantum cloud alters the electron’s magnetic moment[12]. Since then, experimental physicists have measured the speed of the electron’s wobble to an extraordinary 13 decimal places[13].
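Schwinger's celebrated one-loop result, quoted here for context, gives the leading quantum-cloud correction to the electron's anomalous moment:

```latex
a_e \equiv \frac{g_e - 2}{2} = \frac{\alpha}{2\pi} \approx 0.00116,
```

where $\alpha \approx 1/137$ is the fine-structure constant.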
The heavier the particle, the more its wobble is altered by undiscovered new particles lurking in its quantum cloud. Since electrons are so light, their sensitivity to new particles is limited.
Muons and taus are much heavier but also far shorter-lived than electrons. While muons exist only for mere microseconds, scientists at Fermilab near Chicago measured the muon’s magnetic moment to 10 decimal places[14] in 2021. They found that muons wobbled noticeably faster than Standard Model predictions, suggesting unknown particles may be appearing in the muon’s quantum cloud.
Taus are the heaviest particle of the family – 17 times more massive than a muon and 3,500 times heavier than an electron. This makes them much more sensitive to potentially undiscovered particles[15] in the quantum clouds. But taus are also the hardest to see, since they live for just a millionth of the time a muon exists.
To date, the best measurement of the tau’s magnetic moment was made in 2004 using a now-retired electron collider[16] at CERN. Though an incredible scientific feat, after multiple years of collecting data that experiment could measure the speed of the tau’s wobble to only two decimal places[17]. Unfortunately, to test the Standard Model, physicists would need a measurement 10 times as precise[18].
Lead ions for near-miss physics
Since the 2004 measurement of the tau’s magnetic moment, physicists have been seeking new ways to measure the tau wobble.
The Large Hadron Collider usually smashes the nuclei of two atoms together – that is why it is called a collider. These head-on collisions create a fireworks display of debris[20] that can include taus, but the noisy conditions preclude careful measurements of the tau’s magnetic moment.
From 2015 to 2018, CERN ran an experiment designed primarily to allow nuclear physicists to study exotic hot matter[21] created in head-on collisions. The particles used in this experiment were lead nuclei stripped of their electrons, called lead ions. Lead ions are electrically charged and produce strong electromagnetic fields[22].
The electromagnetic fields of lead ions contain particles of light called photons. When two lead ions collide, their photons can also collide and convert all their energy into a single pair of particles. It was these photon collisions that scientists used to measure muons[23].
These lead ion experiments ended in 2018, but it wasn’t until 2019 that one of us, Jesse Liu, teamed up with particle physicist Lydia Beresford in Oxford, England, and realized the data from the same lead ion experiments could potentially be used to do something new: measure the tau’s magnetic moment.
This discovery was a total surprise[24]. It goes like this: Lead ions are so small that they often miss each other in collision experiments. But occasionally, the ions pass very close to each other without touching. When this happens, their accompanying photons can still smash together while the ions continue flying on their merry way.
These photon collisions can create a variety of particles – like the muons in the previous experiment, and also taus. But without the chaotic fireworks produced by head-on collisions, these near-miss events are far quieter and ideal for measuring traits of the elusive tau.
Much to our excitement, when the team looked back at data from 2018, indeed these lead ion near misses were creating tau particles. There was a new experiment hidden in plain sight!
First measurement of tau wobble in two decades
In April 2022, the CERN team announced that we had found direct evidence of tau particles created[27] during lead ion near misses. Using that data, the team was also able to measure the tau magnetic moment – the first time such a measurement had been done since 2004. The final results were published on Oct. 12, 2023.
This landmark result measured the tau wobble to two decimal places. Much to our astonishment, this method tied the previous best measurement using only one month of data recorded in 2018.
After no experimental progress for nearly 20 years, this result opens an entirely new and important path toward the tenfold improvement in precision needed to test Standard Model predictions. Excitingly, more data is on the horizon.
The Large Hadron Collider just restarted lead ion data collection on Sept. 28, 2023[28], after routine maintenance and upgrades. Our team plans to quadruple the sample size of lead ion near-miss data by 2025. This increase in data will double the accuracy of the measurement of the tau magnetic moment, and improvements to analysis methods may go even further.
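The "quadruple the data, double the accuracy" arithmetic follows from standard counting statistics: the uncertainty $\sigma$ of such a measurement shrinks with the square root of the number of recorded events $N$,

```latex
\sigma \propto \frac{1}{\sqrt{N}}, \qquad \sigma_{4N} = \frac{\sigma_N}{\sqrt{4}} = \frac{\sigma_N}{2}.
```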
Tau particles are one of physicists’ best windows to the enigmatic quantum world, and we are excited for surprises that upcoming results may reveal about the fundamental nature of the universe.