Science + Tech – The Conversation

Could a telescope ever see the beginning of time? An astronomer explains

Thousands of galaxies, each containing billions of stars, are in this 2022 photo taken by the James Webb Space Telescope. NASA/ESA/CSA/STScI

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


If the James Webb telescope was 10 times more powerful, could we see the beginning of time? - Sam H., age 12, Prosper, Texas


The James Webb Space Telescope, or JWST for short, is one of the most advanced telescopes ever built. Planning for JWST began over 25 years ago, and construction efforts spanned over a decade. It was launched into space on Dec. 25, 2021, and within a month arrived at its final destination: 930,000 miles away from Earth. Its location in space allows it a relatively unobstructed view of the universe.

The telescope design was a global effort, led by NASA and intended to push the boundaries of astronomical observation with revolutionary engineering. Its mirror is massive – about 21 feet (6.5 meters) in diameter, nearly three times the diameter of the mirror on the Hubble Space Telescope, which launched in 1990 and is still working today.

It’s a telescope’s mirror that allows it to collect light. JWST’s is so big that it can “see” the faintest and farthest galaxies and stars in the universe. Its state-of-the-art instruments can reveal information about the composition, temperature and motion of these distant cosmic objects.

As an astrophysicist, I’m continually looking back in time to see what stars, galaxies and supermassive black holes looked like when their light began its journey toward Earth, and I’m using that information to better understand their growth and evolution. For me, and for thousands of space scientists, the James Webb Space Telescope is a window to that unknown universe.

Just how far back can JWST peer into the cosmos and into the past? About 13.5 billion years.

Against the blackness of space, the golden mirrors of the telescope are prominent.
This illustration of the front view of the James Webb Space Telescope shows its sun shield and golden mirrors. NASA/ESA/CSA/Northrop Grumman

Time travel

A telescope does not show stars, galaxies and exoplanets as they are right now. Instead, astronomers are catching a glimpse of how they were in the past. It takes time for light to travel across space and reach our telescopes. In essence, that means a look into space is also a trip back in time.

This is even true for objects that are quite close to us. The light you see from the Sun left it about 8 minutes, 20 seconds earlier. That’s how long it takes for the Sun’s light to travel to Earth.

You can easily do the math on this. All light – whether sunlight, a flashlight or a light bulb in your house – travels at 186,000 miles (almost 300,000 kilometers) per second. That’s just over 11 million miles (about 18 million kilometers) per minute. The Sun is about 93 million miles (150 million kilometers) from Earth. That comes out to about 8 minutes, 20 seconds.

But the farther away something is, the longer its light takes to reach us. That’s why the light we see from Proxima Centauri, the closest star to us aside from our Sun, is just over four years old: the star is about 25 trillion miles (approximately 40 trillion kilometers) from Earth, so its light takes just over four years to reach us. Or, as scientists like to say, Proxima Centauri is about four light years away.
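
Here is a short Python sketch of that arithmetic, using the approximate round-number distances quoted above, so the outputs are estimates rather than precise values:

# Light travel times from the approximate distances quoted above.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_000      # about 300,000 km per second

def light_travel_time_seconds(distance_miles):
    """How long light takes to cross a given distance, in seconds."""
    return distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC

sun_sec = light_travel_time_seconds(93_000_000)               # Sun-Earth distance
print(f"Sunlight is about {sun_sec / 60:.1f} minutes old")     # roughly 8.3 minutes

proxima_sec = light_travel_time_seconds(25_000_000_000_000)   # Proxima Centauri
print(f"Proxima Centauri's light is about {proxima_sec / (365.25 * 24 * 3600):.1f} years old")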

Most recently, JWST observed Earendel, one of the farthest stars ever detected. The light that JWST sees from Earendel is about 12.9 billion years old.

The James Webb Space Telescope can look much farther back in time than earlier telescopes such as the Hubble Space Telescope. For example, while Hubble can see objects 60,000 times fainter than the human eye can detect, JWST can see objects almost nine times fainter than even Hubble can.

A diagram that shows how far back the James Webb Space Telescope can see.
The James Webb Space Telescope can see back 13.5 billion years – back to when the first stars and galaxies began to form. STScI

The Big Bang

But is it possible to see back to the beginning of time?

The Big Bang is a term used to define the beginning of our universe as we know it. Scientists believe it occurred about 13.8 billion years ago. It is the most widely accepted theory among physicists to explain the history of our universe.

The name is a bit misleading, however, because it suggests that some sort of explosion, like fireworks, created the universe. The Big Bang more closely represents the appearance of rapidly expanding space everywhere in the universe. The environment immediately after the Big Bang was like a cosmic fog that covered the universe, making it hard for light to travel through it. Eventually, galaxies, stars and planets started to grow.

Because light could not yet travel freely, this early era is called the “cosmic dark ages.” As the universe continued to expand, the cosmic fog began to lift, and light was eventually able to travel freely through space. In fact, a few satellites have observed the light left over from the Big Bang, emitted about 380,000 years after it occurred. These telescopes were built to detect the splotchy leftover glow from the Big Bang, which can be tracked in the microwave band.

However, even 380,000 years after the Big Bang, there were no stars and galaxies. The universe was still a very dark place. The cosmic dark ages wouldn’t end until a few hundred million years later, when the first stars and galaxies began to form.

Clouds of red, pink and white gas and dust highlight this starscape.
This is a JWST image of NGC 604, a star-forming region about 2.7 million light years from Earth. NASA/ESA/CSA/STScI

The James Webb Space Telescope was not designed to observe as far back as the Big Bang, but instead to see the period when the first objects in the universe began to form and emit light. Before this time period, there is little light for the James Webb Space Telescope to observe, given the conditions of the early universe and the lack of galaxies and stars.

Peering back to the time period close to the Big Bang is not simply a matter of having a larger mirror – astronomers have already done it using other satellites that observe microwave emission from very soon after the Big Bang. So, the James Webb Space Telescope observing the universe a few hundred million years after the Big Bang isn’t a limitation of the telescope. Rather, that’s actually the telescope’s mission. It’s a reflection of where in the universe we expect the first light from stars and galaxies to emerge.

By studying ancient galaxies, scientists hope to understand the unique conditions of the early universe and gain insight into the processes that helped them flourish. That includes the evolution of supermassive black holes, the life cycle of stars, and what exoplanets – worlds beyond our solar system – are made of.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Adi Foord does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Dali hit Key Bridge with the force of 66 heavy trucks at highway speed

The Baltimore bridge didn't stand a chance. AP Photo/Julia Nikhinson
A tile reading '26 million pounds of force exerted by the Dali during its collision with the Francis Scott Key Bridge'
CC BY-ND

The cargo ship Dali knocked down three main truss spans, constructed with connected steel elements forming triangles, on the Francis Scott Key Bridge just seconds after crashing into one of the bridge piers early on Tuesday morning, March 26, 2024.

The bridge collapse happened so fast that it left little time for the work crews on the bridge to escape. Civil engineers like me have been paying attention to this disaster, because we want to find ways to make infrastructure like these large bridges more resilient. For a bridge this large to collapse would require a catastrophic collision force. But using some basic physics principles, we can actually estimate approximately what that force was.

Dali hitting the Francis Scott Key Bridge.

The impulse momentum theorem

You can calculate the magnitude of the Dali’s collision force using a fundamental physics principle called the impulse momentum theorem.

The theorem is derived directly from Newton’s second law, which states that force equals mass times acceleration. The impulse momentum theorem multiplies both sides of this equation by time, so that force multiplied by time equals mass multiplied by the change in velocity over the period the force is applied.

F*∆t = m*∆v.

To apply the impulse momentum theorem to Dali’s collision, multiply its collision force by how long the collision lasted, and equate that with Dali’s mass multiplied by its change in velocity from before to after the crash. So, Dali’s collision force depends on its mass, how long the collision lasted, and how much it slowed down during the crash.

The numbers for Dali’s crash

Dali weighs 257,612,358 pounds or 116,851 metric tonnes when it is fully loaded. It traveled at a speed of 10 miles per hour, or 16.1 kilometers per hour, before the collision; after crashing into the bridge pier, Dali slowed down to 7.8 miles per hour, or 12.6 kilometers per hour.

Another important parameter is the collision time – the period during the crash when the ship was in contact with the bridge, which caused Dali to suddenly slow.

Nobody knows the exact collision time yet, but based on Dali’s voyage data recorder and the Maryland Transportation Authority Police log, the total collision time was less than four seconds.

For cars crashing on a highway, the collision time is usually only half a second to one second. Dali’s crash looks similar to how a vehicle might crash into a bridge pier, so it makes sense to use a similar collision time to estimate the collision force.

Dali’s collision force

With those estimates and the impulse momentum theorem, you can get a pretty good idea of what Dali’s collision force probably was.

Dali’s collision force is calculated by taking Dali’s mass and multiplying it by Dali’s velocity change before and after the crash, then dividing all that by the collision time duration. If you assume the collision time is only one second, that gives a collision force of 26,422,562 pounds.

257,612,358 pounds / (32.2 ft/sec²) * (14.7 feet/sec - 11.4 feet/sec) / 1 sec = 26,422,562 pounds.
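
To make the arithmetic explicit, here is a minimal Python sketch of the same impulse-momentum estimate in U.S. customary units. The one-second collision time is the assumption discussed above, and the rounded speeds match the equation:

# Impulse-momentum estimate of the Dali's collision force (U.S. customary units).
WEIGHT_LBS = 257_612_358       # Dali's weight when fully loaded
G_FT_PER_S2 = 32.2             # gravitational acceleration

mass_slugs = WEIGHT_LBS / G_FT_PER_S2     # mass = weight / g
speed_before_ft_s = 14.7                  # about 10 mph
speed_after_ft_s = 11.4                   # about 7.8 mph
collision_time_s = 1.0                    # assumed collision duration

force_lbs = mass_slugs * (speed_before_ft_s - speed_after_ft_s) / collision_time_s
print(f"Estimated collision force: {force_lbs:,.0f} pounds")   # roughly 26 million pounds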

For reference, the American Association of State Highway and Transportation Officials specifies that the collision force on a highway bridge pier from a truck crash is about 400,000 pounds.

In other words, the cargo ship Dali’s collision force on the Baltimore Key Bridge pier was equivalent to 66 heavy trucks driving at 60 miles per hour (97 kilometers per hour) and hitting the bridge pier simultaneously. This magnitude is far beyond the force that the pier can withstand.

While designing a super robust bridge that can handle this level of collision force would be technically achievable, doing so would dramatically increase the cost of the bridge. Civil engineers are investigating different approaches that would reduce the force put directly on the piers, such as using energy absorbent protection barriers around the piers that dissipate the shock. These sorts of solutions could prevent disasters like this in the future.

The Conversation

Amanda Bao does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

US media coverage of new science less likely to mention researchers with African and East Asian names

Once their research comes out, who will be quoted in the news coverage? gorodenkoff/iStock via Getty Images Plus

When one Chinese national recently petitioned the U.S. Citizenship and Immigration Services to become a permanent resident, he thought his chances were pretty good. As an accomplished biologist, he figured that news articles in top media outlets, including The New York Times, covering his research would demonstrate his “extraordinary ability” in the sciences, as called for by the EB-1A visa.

But when the immigration officers rejected his petition, they noted that his name did not appear anywhere in the news article. News coverage of a paper he co-authored did not directly demonstrate his major contribution to the work.

As this biologist’s close friend, I felt bad for him because I knew how much he had put into the project – he had even developed the idea as one of his Ph.D. dissertation chapters. But as a scientist who studies topics related to scientific innovation, I understand the immigration officers’ perspective: Research is increasingly done through teamwork, so it’s hard to know individual contributions if a news article reports only the study findings.

This anecdote made me and my colleagues Misha Teplitskiy and David Jurgens curious about what affects journalists’ decisions about which researchers to feature in their news stories.

There’s a lot at stake for a scientist whose name is or isn’t mentioned in journalistic coverage of their work. News media plays a key role in disseminating new scientific findings to the public. The coverage of a particular study brings prestige to its research team and their institutions. The depth and quality of coverage then shape public perception of who is doing good science and, in some cases, as my friend’s story suggests, can affect individual careers.

Do scientists’ social identities, such as ethnicity or race, play a role in this process?

This question is not straightforward to answer. On the one hand, racial bias may exist, given the profound underrepresentation of minorities in U.S. mainstream media. On the other, science journalism is known for its high standard of objective reporting. We decided to investigate this question in a systematic fashion using large-scale observational data.

Chinese or African names received least coverage

My colleagues and I analyzed 223,587 news stories from 2011-2019 from 288 U.S. media outlets reporting on 100,486 scientific papers sourced from Altmetric.com, a website that monitors online posts about research papers. For each paper, we focused on authors with the highest chance of being mentioned: the first author, last author and other designated corresponding authors. We calculated how often the authors were mentioned in the news articles reporting their research.

We used an algorithm with 78% reported accuracy to infer perceived ethnicity from authors’ names. We figured that journalists may rely on such cues in the absence of scientists’ self-reported information. We considered authors with Anglo names – like John Brown or Emily Taylor – as the majority group and then compared the average mention rates across nine broad ethnic groups.
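
The core comparison behind the numbers that follow is straightforward to express. Below is a hedged Python sketch of how mention rates by perceived name group could be tabulated; the column names and the tiny example table are purely illustrative, not the study's actual code or data:

import pandas as pd

# Illustrative records only: one row per focal author of a covered paper,
# with a flag for whether that author was named in the news story.
records = pd.DataFrame({
    "name_group": ["Anglo", "East Asian", "African", "Anglo", "East Asian"],
    "mentioned": [1, 0, 0, 1, 1],
})

# Average mention rate per perceived name group, compared with the Anglo baseline.
rates = records.groupby("name_group")["mentioned"].mean()
gaps = rates - rates["Anglo"]
print(rates.round(2))
print(gaps.round(2))   # negative values mean a lower mention rate than Anglo names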

Our methodology does not distinguish Black from white names because many African Americans have Anglo names, such as Michael Jackson. This design is still meaningful because we intended to focus on perceived identity.

We found that the overall chance of a scientist being credited by name in a news story was 40%. Authors with minority ethnicity names, however, were significantly less likely to be mentioned compared with authors with Anglo names. The disparity was most pronounced for authors with East Asian and African names; they were on average mentioned or quoted about 15% less in U.S. science media relative to those with Anglo names.

This association is consistent even after accounting for factors such as geographical location, corresponding author status, authorship position, affiliation rank, author prestige, research topics, journal impact and story length.

And it held across different types of outlets, including publishers of press releases, general interest news and those with content focused on science and technology.

Pragmatic factors and rhetorical choices

Our results don’t directly imply media bias. So what’s going on?

First and foremost, the underrepresentation of scientists with East Asian and African names may be due to pragmatic challenges faced by U.S.-based journalists in interviewing them. Factors like time zone differences for researchers based overseas and actual or perceived English fluency could be at play as a journalist works under deadline to produce the story.

We isolated these factors by focusing on researchers affiliated with American institutions. Among U.S.-based researchers, pragmatic difficulties should be minimized because they’re in the same geographic region as the journalists and they’re likely to be proficient in English, at least in writing. In addition, these scientists would presumably be equally likely to respond to journalists’ interview requests, given that media attention is increasingly valued by U.S. institutions.

Even when we looked just at U.S. institutions, we found significant disparities in mentions and quotations for non-Anglo-named authors, albeit slightly reduced. In particular, East Asian- and African-named authors again experience a 4 to 5 percentage-point drop in mention rates compared with their Anglo-named counterparts. This result suggests that while pragmatic considerations can explain some disparities, they don’t account for all of them.

Video camera points at woman interviewing a man in a tech setting
The scientists that journalists reach out to become the face of the research. brightstars/E+ via Getty Images

We found that journalists were also more likely to substitute institutional affiliations for scientists with African and East Asian names – for instance, writing about “researchers from the University of Michigan.” This institution substitution effect underscores a potential bias in media representation, where scholars with minority ethnicity names may be perceived as less authoritative or deserving of formal recognition.

Reflecting a globalized enterprise

Part of the depth of science news coverage depends on how thoroughly and accurately researchers are portrayed in stories, including whether scientists are mentioned by name and the extent to which their contributions are highlighted via quotes. As science becomes increasingly globalized, with English as its primary language, our study highlights the importance of equitable representation in shaping public discourse and fostering diversity in the scientific community.

While our focus was on the depth of coverage with respect to name credits, we suspect that disparities are even larger at an earlier point in science dissemination, when journalists are selecting which research papers to report. Understanding these disparities is complicated because of decades or even centuries of bias ingrained in the whole science production pipeline, including whose research gets funded, who gets to publish in top journals and who is represented in the scientific workforce itself.

Journalists are picking from a later stage of a process that has a number of inequities built in. Thus, addressing disparities in scientists’ media representation is only one way to foster inclusivity and equality in science. But it’s a step toward sharing innovative scientific knowledge with the public in a more equitable way.

The Conversation

Hao Peng does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Tiny crystals capture millions of years of mountain range history – a geologist excavates the Himalayas with a microscope

This image of a single crystal shows 30 million years of geological history of the Himalayas by tracing its thorium concentration and age. Matthew J. Kohn, CC BY-NC-ND

The Himalayas stand as Earth’s highest mountain range, possibly the highest ever. How did it form? Why is it so tall?

You might think understanding big mountain ranges requires big measurements – perhaps satellite imaging over tens or hundreds of thousands of square miles. Although scientists certainly use satellite data, many of us, including me, study the biggest of mountain ranges by relying on the smallest of measurements in tiny minerals that grew as the mountain range formed.

These minerals are found in metamorphic rocks – rocks transformed by heat, pressure or both. One of the great joys in studying metamorphic rocks lies in microanalysis of their minerals. With measurements on scales smaller than the thickness of a human hair, we can unlock the age and chemical compositions hidden inside tiny crystals to understand processes occurring on a colossal scale.

Measuring radioactive elements

Minerals containing radioactive elements are of special interest because these elements, called parents, decay at known rates to form stable elements, called daughters. By measuring the ratio of parent to daughter, we can determine how old a mineral is.

With microanalysis, we can even measure different ages in different parts of a crystal to determine different growth stages. By linking the chemistry of different zones within a mineral to events in the history of a mountain range, researchers can infer how the mountain range was assembled and how quickly.

Snowcapped mountain rising into a blue sky, with a thin flagpole with prayer flags and a pagoda in the foreground
A snapshot of Annapurna, one mountain in the Himalayan range, taken by the author in 2014. Matthew J. Kohn, CC BY-NC-ND

My research team and I analyzed and imaged a single grain of metamorphic monazite from rocks we collected from the Annapurna region of central Nepal. Though only 0.07 inches (1.75 mm) long, this is a gigantic crystal by geologists’ standards – roughly 30 times larger than typical monazite crystals. We nicknamed it “Monzilla.”

Using an electron probe microanalyzer, we collected and visualized data on the concentration of thorium – a radioactive element, similar to uranium – in the crystal. Colors show the distribution of thorium, where white and red indicate higher concentrations, while blue and purple indicate lower concentrations. Numbers superimposed on the image represent age in millions of years.

Thorium-lead dating measures the ratio of parent thorium to its daughter lead; this ratio depends on thorium’s decay rate and the age of the crystal. We see two different zones are present in the sample: a roughly 30 million-year-old core with high thorium concentrations and a roughly 10 million-year-old, blobby rim with low thorium concentrations.
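
The age calculation itself rests on the standard parent-daughter decay equation. Here is a simplified Python sketch using the textbook half-life of thorium-232; it illustrates the principle only, not the actual data-reduction procedure used for the microprobe measurements:

import math

TH232_HALF_LIFE_YEARS = 1.405e10                      # thorium-232 half-life
DECAY_CONSTANT = math.log(2) / TH232_HALF_LIFE_YEARS  # decays per year

def radiometric_age_years(daughter_to_parent_ratio):
    """Standard age equation: t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + daughter_to_parent_ratio) / DECAY_CONSTANT

# A radiogenic lead-208 to thorium-232 ratio of roughly 0.0015 corresponds to
# an age of about 30 million years, like the monazite core described above.
print(f"{radiometric_age_years(0.0015) / 1e6:.0f} million years")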

What do these ages signify?

As the Indian tectonic plate crunches northward into Asia, rocks are first buried deeply, then thrust southward on huge faults. These faults are presently responsible for some of the most catastrophic earthquakes on our planet. As one example, in 2015, the magnitude 7.8 Gorkha earthquake in central Nepal triggered landslides that obliterated the town of Langtang, where I had worked about a dozen years prior. An estimated 329 people died there, and only 14 survived.

Our chemical analyses of this monazite crystal and nearby samples indicate that these rocks were buried deep underneath thrust faults, causing them to partially melt and form the roughly 30 million-year-old monazite core. About 10 million years ago, the rocks were carried up on a major thrust fault, forming the monazite rim. This data shows that building mountain ranges takes a long time – at least 30 million years, in this case – and that rocks basically cycle through them.

By studying rocks in other locations, we can chart the movement of these thrusts and better understand the origins of the Himalayas.

The Conversation

Matthew J. Kohn receives funding from the National Science Foundation and the Department of Energy.

Fossilized dinosaur eggshells can preserve amino acids, the building blocks of proteins, over millions of years

A dinosaur eggshell cross section, as imaged under fluorescence microscopy. Evan Saitta

Lab work can sometimes get monotonous for a scientist. But in 2017, while I was a Ph.D. student in paleobiology at the University of Bristol in the U.K., I heard a gleeful exclamation from across the room. Kirsty Penkman, head of the North East Amino Acid Racemization lab at the University of York, had just read the data printed off the chromatograms and was practically jumping up and down.

The instrument had detected telltale signatures of ancient amino acids in eggshell. Amino acids are the building blocks that make up protein sequences in living organisms. But this wasn’t just any eggshell; it was a fossil from a titanosaur, a giant herbivorous dinosaur that lived about 70 million years ago.

A line graph with about 20 sharp peaks
The chromatograph printout that Kirsty Penkman saw. Each peak represents an amino acid detected by the instrument during the analysis. Evan Saitta

Not much organic material survives over millions of years, which limits scientists’ ability to study the biology of extinct organisms compared to modern ones, whose proteins and DNA can be sequenced. As Penkman’s enthusiasm suggested, these amino acids were extraordinary.

In fact, this result came unexpectedly amid our team’s efforts to test claims of near-pristine protein preservation in dinosaur bone. I had brought various fossil bones to Penkman, but the results suggested no original amino acids had been preserved, and that the bones were even contaminated by microbes from the environment they’d been buried in.

Testing eggshell fossils was not even in our original research plan.

Orphan fossil fragments

However, I had just seen that my colleague Beatrice Demarchi, Penkman and their team had detected short protein sequences in 3.8-million-year-old bird eggshell. I predicted that if dinosaur eggshells didn’t preserve any original proteins, then their bones likely wouldn’t preserve any either, and wanted to see whether that was the case. Luckily, we had a source of dinosaur eggshell.

Around 2000, many eggshell fragments were illegally exported from Argentina into the commercial market. As a fossil-obsessed child, I was even gifted a coin-sized fragment from a U.S. mineral store. Penkman and I tested that fragment, as well as another fragment from a European museum’s gift shop.

These fossil fragments in some ways gained scientific value because they didn’t belong to any museum collections. We didn’t have to worry about damaging them during the analysis. To our surprise, we had stumbled upon a rare opportunity to study ancient organic remains from a dinosaur.

Amino acids in eggshell

Prompted by this initial discovery, our large, international team analyzed more dinosaur eggshells from Argentina, Spain and China, using a wide variety of techniques. Although some eggshells preserved amino acids far better than others, the evidence overall suggested that these molecules were ancient and original, possibly ranging from 66 million to 86 million years old.

During life, proteins that helped to calcify the eggshell became trapped within the mineral crystals. The remaining amino acids we detected, however, consisted of free molecules that had broken off from their protein chains by reactions with water. We only detected a few of the most stable amino acids. Less stable ones were absent, as they had degraded away.

A figure showing the silhouette of a titanosaur, a dinosaur with a long neck and stocky legs, as well as ball and stick models of four amino acids and a chemical reaction where a protein and water turn into free amino acids.
A titanosaur, models of the four preserved amino acids, and the breakdown reaction from a protein sequence into free amino acids. Arthur S. Brum (silhouette), Ben Mills (ball-and-stick models), V8rik (hydrolysis reaction)

The amino acids still preserved were what chemists call racemic. Amino acids can occur in left- or right-handed configurations. Living organisms regulate their amino acids such that they appear almost exclusively in the left-handed configuration. After the organism dies, amino acids can convert between handedness until they reach 50-50 mixtures of both configurations.

A 50-50 mix is known as racemic and suggests that the amino acids had separated from their protein chains very long ago.
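
For an amino acid with a single chiral center, racemization is often modeled as a reversible first-order reaction, which makes the D-to-L ratio climb toward 1 along a tanh curve. The Python sketch below illustrates that behavior with a purely made-up rate constant; real rates depend strongly on the amino acid, temperature and burial history:

import math

def d_to_l_ratio(time_years, rate_constant_per_year):
    """Reversible first-order racemization model: D/L = tanh(k * t)."""
    return math.tanh(rate_constant_per_year * time_years)

k = 1e-6   # illustrative rate constant only
for t in (1e4, 1e6, 1e8):
    print(f"after {t:.0e} years: D/L = {d_to_l_ratio(t, k):.2f}")
# Very old samples approach D/L = 1 -- the racemic 50-50 mixture described above.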

A drawing of two hands holding two configurations of a chemical model. The two configurations mirror each other.
Left- and right-handed versions of amino acids. While so-called stereoisomers have the same chemical formula, they’re organized as mirror images of each other. NASA

Calcite, an amino acid archive

Our dinosaur eggshell results showed more extreme degradation than seen in younger fossils of bird eggshell and mollusk shell. Our results also matched those from experiments that expose eggshell to heat in the lab, simulating degradation over thousands or millions of years.

Organisms reinforce these shells with a type of calcium carbonate mineral called calcite. Unlike the calcium phosphate that makes up bone, calcite can act as a closed system by trapping the products of proteins involved in calcification as they break down, including free amino acids separated from protein sequences. This closed system allowed us to observe the amino acids in our analyses.

Bird eggshell is even among the best materials to find preserved protein sequences in fossils, let alone free amino acids. Demarchi’s team has detected short, intact sequences of amino acids still bound in a chain from bird eggshell at least 6.5 million years old.

Other researchers have claimed to have found more ancient amino acids, along with more extreme and less likely claims of preserved protein sequences. But our study uses a wider range of methods and reports the best signal for stable molecules in a tissue we now know preserves molecules well.

Our dinosaur amino acids might hold the record for the oldest protein-related material yet found for which the evidence is very strong, and the first clear evidence from a Mesozoic dinosaur.

Using calcite to look back in time

The genetic sequence in DNA that is ultimately expressed in proteins provides a source code for organisms that scientists can study. But if only a subset of amino acids are preserved in the fossil, it’s like removing all but five letters from a book – little literary analysis is possible. So, what messages from ancient life might persist in these calcite time capsules?

One biologically informative signal might involve stable isotopes, which are atoms of the same element with different masses. Scientists can look at the stable isotope ratios of carbon, oxygen or nitrogen to learn about their source, such as the animal’s diet. Since eggshell calcite is a closed system, stable isotope ratios in their amino acids are more likely to come directly from the dinosaur, rather than outside contamination.
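
Those ratios are conventionally reported in "delta" notation – the deviation of a sample's isotope ratio from an agreed standard, in parts per thousand. Here is a minimal sketch of that bookkeeping, with hypothetical example numbers:

def delta_per_mil(sample_ratio, standard_ratio):
    """Delta notation: (R_sample / R_standard - 1) * 1000, in per mil."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Hypothetical carbon-13 / carbon-12 ratios for a sample and a reference standard.
print(f"delta-13C = {delta_per_mil(0.011056, 0.011180):.1f} per mil")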

In future research, our team will use fossils to search even further back in time. Organisms other than egg-laying dinosaurs reinforced their tissues with calcite. For example, marine arthropods called trilobites that lived more than half a billion years ago had calcite in their eyes.

Studying older remains could help scientists understand the molecular changes that happen in fossils over long periods of time. Fossil calcite, Earth’s molecular time capsule, may send faint tales from long-gone life for researchers to better understand their biology.

The orphan eggshells used in our initial analysis found a happy ending. They eventually gained a new home at Argentina’s Museo Provincial Patagonico de Ciencias Naturales, a natural sciences museum in Patagonia, repatriated to the only province known to produce that type of eggshell microscopic structure.

The Conversation

This research was supported by the University of Bristol Bob Savage Memorial Fund and the Leverhulme Trust (PLP-2012-116).

Using research to solve societal problems starts with building connections and making space for young people

One or two or 10 studies won't solve our most complex societal challenges. Big problems require collaborations beyond academia. Orbon Alija/E+ via Getty Images

Often, when scientists do research around a specific societal challenge, they hope their work will help solve that larger problem. Yet translating findings into long-lasting, community-driven solutions is much harder than most expect.

It seems intuitive that scientists studying living organisms, microbes and ecosystems could apply their findings to tackle food shortages, help keep environments healthy and improve human and animal health. But it’s not always that easy. Issues like climate change, renewable energy, public health and migration are complex, making direct solutions challenging to develop and implement.

As a group of researchers invested in helping scientists create meaningful impact with their work, we understand that problems like these will need experts from different fields and industries to work together.

This means we might need to reevaluate certain aspects of the inquiry process and embrace fresh perspectives if we, as members of the scientific community, want to improve our capacity for producing solutions-oriented research.

Defining use-inspired research

Science does not occur in a vacuum. Factors including funding availability, access to advanced technologies and political or social contexts can influence the kinds of studies that get done. A framework called use-inspired research and engagement, or UIRE, acknowledges this fact.

In use-inspired research, the potential applications of findings for society shape the directions of exploration.

In UIRE, researchers work with members of a community to figure out what questions they should look into. They form partnerships with other stakeholders, including governments, businesses of all scales and nonprofits, to form a collaborative foundation. This way, researchers can tailor investigations from the outset to be useful to and usable by decision-makers.

Translational research, or intentionally grounding scientific exploration in practical applications, isn’t new. Use-inspired research expands on translational research, prioritizing building connections between practitioners and communities.

Translational research and use-inspired research rely on collaborations between researchers and stakeholders outside academia.

In the U.S., the passage of the CHIPS and Science Act in 2022 further codified use-inspired research. The act directed US$280 billion over the next 10 years toward funding scientific inquiry to boost domestic competitiveness, innovation and national security.

This legislation also authorized the establishment of the National Science Foundation’s Directorate for Technology, Innovation and Partnerships, called NSF TIP. TIP marks the agency’s first new directorate in over three decades, created with the aim of sparking the growth of diverse innovation and technology landscapes.

Producing science in partnership

In use-inspired research and engagement, collaboration is a big part of each project from the start, when the researchers are first deciding what to study. These cooperative partnerships continue throughout data collection and analysis. Together, these teams apply the results and develop products, implement behavior changes, or further inform community decision-making.

For example, a large hospital, an academic organization and several nonprofits may partner together to explore issues affecting health care accessibility in the region. Researchers collect data through surveys and interviews, and interpret the findings within the community’s specific circumstances. They can then coordinate data evaluation with the health care and nonprofit partners, which helps take socioeconomic status, cultural beliefs and built infrastructure like grocery stores and public transportation into account.

A small group of medical professionals gather around a table. They are each dressed professionally and have files scattered between them.
Academic researchers can collaborate with places like hospitals and nonprofits to study specific problems facing their community. FatCamera/E+ via Getty Images

This approach brings together the broad perspectives of a large hospital network, academic expertise around survey creation and data analysis, and specialized knowledge held by nonprofits. These groups can then collaborate further to develop specific programs, such as educational initiatives and enhanced health care services. They can tailor these to the needs of the community they serve.

Use-inspired research matters because it looks at all the different issues facing a community holistically and keeps them in mind when investigating potential solutions. UIRE is not a substitute for basic, foundational research, which explores new questions to fundamentally understand a topic. Rather, it’s an approach centered around selecting questions and developing methods based on real-world importance.

UIRE creates a foundation for long-term, inclusive partnerships – and not just within academia. Government, community organizations, large companies and startups can all use the same principles of UIRE to share ideas and craft solutions to issues facing their communities. Individuals from all sorts of backgrounds are equally integral to the entire process, further amplifying the viewpoints present.

Use-inspired methods are not only relevant to improving research outcomes. A use-inspired approach drives innovation and technological advancements across sectors. When used in K-12 classrooms, UIRE leads to well-rounded students.

This approach can also improve learning in workforce development spaces, creating employees trained to build connections.

UIRE provides platforms for the general public to participate in conversations about issues impacting their lives that they may not have otherwise been a part of.

Harnessing early-career engagement

Use-inspired methods challenge not only how, but who contributes to and benefits from scientific inquiry. They also focus on making the findings accessible to those outside academia.

To craft necessary solutions for complex societal problems, institutions will need to continue backing traditional scholars who excel at pure basic research. At the same time, they can support training in use-inspired domains.

Early-career professionals across sectors will continue to play an important role in spreading and sustaining the cultural shifts necessary to embrace use-inspired research at a wider scale. These early-career professionals can bring fresh ideas to the table and craft innovative approaches to problems.

To support translational research long term, institutions and supervisors can support students in hands-on learning opportunities from the first year of undergraduate coursework to postgraduate fellowships. These opportunities can help students learn about UIRE and equip them with the skills needed to build cross-sector partnerships before entering the workforce.

By receiving mentorship from individuals outside academia, students and trainees can gain exposure to different career paths and find motivation to pursue opportunities outside traditional academic roles. This mentorship fosters creative problem-solving and adaptability.

UIRE provides a potential framework for addressing complex societal challenges. Creating opportunities for the ongoing involvement of young people will seed a vibrant future for use-inspired research and engagement.

The Conversation

Zoey England is currently completing a Use-Inspired Research Science Communications fellowship, funded through a grant from the National Science Foundation. She has also received funding from CTNext.

Jennifer Forbey receives funding from the National Science Foundation.

Michael Muszynski receives funding from the National Science Foundation. He is affiliated with the Maize Genetics Cooperation.

Infections after surgery are more likely due to bacteria already on your skin than from microbes in the hospital − new research

Genetic analysis of the bacteria causing surgical site infections revealed that many were already present on the patient's skin. Ruben Bonilla Gonzalo/Moment via Getty Images

Health care providers and patients have traditionally thought that infections patients get while in the hospital are caused by superbugs they’re exposed to while they’re in a medical facility. Genetic data from the bacteria causing these infections – think CSI for E. coli – tells another story: Most health care-associated infections are caused by previously harmless bacteria that patients already had on their bodies before they even entered the hospital.

Research comparing bacteria in the microbiome – those colonizing our noses, skin and other areas of the body – with the bacteria that cause pneumonia, diarrhea, bloodstream infections and surgical site infections shows that the bacteria living innocuously on our own bodies when we’re healthy are most often responsible for these bad infections when we’re sick.

Our newly published research in Science Translational Medicine adds to the growing number of studies supporting this idea. We show that many surgical site infections after spinal surgery are caused by microbes that are already on the patient’s skin.

Surgical infections are a persistent problem

Among the different types of health care-associated infections, surgical site infections stand out as particularly problematic. A 2013 study found that surgical site infections contribute the most to the annual costs of hospital-acquired infections, totaling over 33% of the US$9.8 billion spent annually. Surgical site infections are also a significant cause of hospital readmission and death after surgery.

In our work as clinicians at Harborview Medical Center at the University of Washington – yes, the one in Seattle that “Grey’s Anatomy” was supposedly based on – we’ve seen how hospitals go to extraordinary lengths to prevent these infections. These include sterilizing all surgical equipment, using ultraviolet light to clean the operating room, following strict protocols for surgical attire and monitoring airflow within the operating room.

Surgeon helping another surgeon put on gloves
Hospitals follow strict protocols to prevent infections resulting from surgical procedures. Morsa Images/DigitalVision via Getty Images

Still, surgical site infections occur following about 1 in 30 procedures, typically with no explanation. While rates of many other medical complications have shown steady improvement over time, data from the Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention show that the problem of surgical site infection is not getting better.

In fact, because administering antibiotics during surgery is a cornerstone of infection prevention, the global rise of antibiotic resistance is forecast to increase infection rates following surgery.

BYOB (Bring your own bacteria)

As a team of physician-scientists with expertise including critical care, infectious diseases, laboratory medicine, microbiology, pharmacy, orthopedics and neurosurgery, we wanted to better understand how and why surgical infections were occurring in our patients despite following recommended protocols to prevent them.

Prior studies on surgical site infection have been limited to a single species of bacteria and used older genetic analysis methods. But new technologies have opened the door to studying all types of bacteria and testing their antibiotic resistance genes simultaneously.

We focused on infections in spinal surgery for a few reasons. First, similar numbers of women and men undergo spine surgery for various reasons across their life spans, meaning our results would be applicable to a larger group of people. Second, more health care resources are expended on spinal surgery than any other type of surgical procedure in the U.S. Third, infection following spine surgery can be particularly devastating for patients because it often requires repeat surgeries and long courses of antibiotics for a chance at a cure.

Over a one-year period, we sampled the bacteria living in the nose, skin and stool of over 200 patients before surgery. We then followed this group for 90 days to compare those samples with any infections that later occurred.

Microscopy image of clusters of spherical bacteria stained yellow against a green background
Staphylococcus aureus is a common cause of hospital-acquired bacterial infections. CDC/Janice Haney Carr/Jeff Hageman, M.H.S.

Our results revealed that while the species of bacteria living on the back skin of patients vary remarkably between people, there are some clear patterns. Bacteria colonizing the upper back around the neck and shoulders are more similar to those in the nose; those normally present on the lower back are more similar to those in the gut and stool. The relative frequency of their presence in these skin regions closely mirrors how often they show up in infections after surgery on those same specific regions of the spine.

In fact, 86% of the bacteria causing infections after spine surgery were genetically matched to bacteria a patient carried before surgery. That number is remarkably close to estimates from earlier studies using older genetic techniques focused on Staphylococcus aureus.

Nearly 60% of infections were also resistant to the preventive antibiotic administered during surgery, the antiseptic used to clean the skin before incision, or both. It turns out this antibiotic resistance also wasn’t acquired in the hospital: it came from microbes the patient had already been living with unknowingly. They likely acquired these antibiotic-resistant microbes through prior antibiotic exposure, consumer products or routine community contact.

Preventing surgical infections

At face value, our results may seem intuitive – surgical wound infections come from bacteria that hang out around that part of the body. But this realization has some potentially powerful implications for prevention and care.

If the most likely source of surgical infection – the patient’s microbiome – is known in advance, this presents medical teams with an opportunity to protect against it prior to a scheduled procedure. Current protocols for infection prevention, such as antibiotics or topical antiseptics, follow a one-size-fits-all model – for example, the antibiotic cefazolin is used for any patient undergoing most procedures – but personalization could make them more effective.

If you were having a major surgery today, no one would know whether the site where your incision will be made was colonized with bacteria resistant to the standard antibiotic regimen for that procedure. In the future, clinicians could use information about your microbiome to select more targeted antimicrobials. But more research is needed on how to interpret that information and understand whether such an approach would ultimately lead to better outcomes.

Today, practice guidelines, commercial product development, hospital protocols and accreditation related to infection prevention are often focused on sterility of the physical environment. The fact that most infections don’t actually start with sources in the hospital is probably a testament to the efficacy of these protocols. But we believe that shifting toward more patient-centered, individualized approaches to infection prevention has the potential to benefit hospitals and patients alike.

The Conversation

Dustin Long receives funding from the National Institutes of Health.

Dr. Bryson-Cahn receives funding from the Gordon and Betty Moore Foundation and is the co-medical director for Alaska Airlines.

Newly discovered genetic variant that causes Parkinson’s disease clarifies why the condition develops and how to halt it

Multiple gene variants are linked to Parkinson's disease, but which ones are the most relevant? dra_schwartz/E+ via Getty Images

Parkinson’s disease is a neurodegenerative movement disorder that progresses relentlessly. It gradually impairs a person’s ability to function until they ultimately become immobile and often develop dementia. In the U.S. alone, over a million people are afflicted with Parkinson’s, and new cases and overall numbers are steadily increasing.

There is currently no treatment to slow or halt Parkinson’s disease. Available drugs don’t slow disease progression and can treat only certain symptoms. Medications that work early in the disease, however, such as Levodopa, generally become ineffective over the years, necessitating increased doses that can lead to disabling side effects. Without understanding the fundamental molecular cause of Parkinson’s, it’s improbable that researchers will be able to develop a medication to stop the disease from steadily worsening in patients.

Many factors may contribute to the development of Parkinson’s, both environmental and genetic. Until recently, underlying genetic causes of the disease were unknown. Most cases of Parkinson’s aren’t inherited but sporadic, and early studies suggested a genetic basis was improbable.

Nevertheless, everything in biology has a genetic foundation. As a geneticist and molecular neuroscientist, I have devoted my career to predicting and preventing Parkinson’s disease. In our newly published research, my team and I discovered a new genetic variant linked to Parkinson’s that sheds light on the evolutionary origin of multiple forms of familial parkinsonism, opening doors to better understand and treat the disease.

Genetic linkages and associations

In the mid-1990s, researchers started looking into whether genetic differences between people with or without Parkinson’s might identify specific genes or genetic variants that cause the disease. In general, I and other geneticists use two approaches to map the genetic blueprint of Parkinson’s: linkage analysis and association studies.

Linkage analysis focuses on rare families where parkinsonism, or neurological conditions with symptoms similar to Parkinson’s, is passed down. This technique looks for cases where a disease-causing version of a gene and the disease appear to be passed down together to the same family members. It requires information on your family tree, clinical data and DNA samples. Relatively few families – such as those with more than two living, affected relatives willing to participate – are needed to expedite new genetic discoveries.

“Linkage” between a pathogenic genetic variant and disease development is so significant that it can inform a diagnosis. It has also become the basis of many lab models used to study the consequences of gene dysfunction and how to fix it. Linkage studies, like the one my team and I published, have identified pathogenic mutations in over 20 genes. Notably, many patients in families with parkinsonism have symptoms that are indistinguishable from typical, late-onset Parkinson’s. Nevertheless, what causes inherited parkinsonism, which typically affects people with earlier-onset disease, may not be the cause of Parkinson’s in the general population.

Genome-wide association studies examine genetic data across a large sample of people.

Conversely, genome-wide association studies, or GWAS, compare genetic data from patients with Parkinson’s with unrelated people of the same age, gender and ethnicity who don’t have the disease. Typically, this involves assessing how frequently each of over 2 million common gene variants appears in both groups. Because these studies require analyzing so many gene variants, researchers need to gather clinical data and DNA samples from over 100,000 people.
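
At its core, a GWAS repeats a simple frequency comparison at each of those millions of variants. The Python sketch below shows one such test on invented allele counts; real analyses use far larger cohorts, adjust for ancestry and other confounders, and apply a stringent genome-wide significance threshold:

from scipy.stats import chi2_contingency

# Invented allele counts at a single variant:
# rows = Parkinson's cases vs. controls, columns = risk allele vs. other allele.
counts = [[5200, 14800],    # cases
          [4700, 15300]]    # controls

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"p-value at this variant: {p_value:.2e}")
# A real GWAS runs a test like this at over 2 million variants, then corrects
# for the enormous number of comparisons (commonly requiring p < 5e-8).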

Although costly and time-consuming, the findings of genome-wide association studies are widely applicable. Combining the data of these studies has identified many locations in the genome that contribute to the risk of developing Parkinson’s. Currently, there are over 92 locations in the genome that contain about 350 genes potentially involved in the disease. However, GWAS locations can be considered only in aggregate; individual results are not helpful in diagnosis nor in disease modeling, as the contribution of these individual genes to disease risk is so minimal.

Together, “linked” and “associated” discoveries imply a number of molecular pathways are involved in Parkinson’s. Each identified gene and the proteins they encode typically can have more than one effect. The functions of each gene and protein may also vary by cell type. The question is which gene variants, functions and pathways are most relevant to Parkinson’s? How do researchers meaningfully connect this data?

Parkinson’s disease genes

Using linkage analysis, my team and I identified a new genetic mutation for Parkinson’s disease called RAB32 Ser71Arg. This mutation was linked to parkinsonism in three families and found in 13 other people in several countries, including Canada, France, Germany, Italy, Poland, Turkey, Tunisia, the U.S. and the U.K.

Although the affected individuals and families originate from many parts of the world, they share an identical fragment of chromosome 6 that contains RAB32 Ser71Arg. This suggests these patients are all related to the same person; ancestrally, they are distant cousins. It also suggests there are many more cousins to identify.

With further analysis, we found RAB32 Ser71Arg interacts with several proteins previously linked to early- and late-onset parkinsonism as well as nonfamilial Parkinson’s disease. The RAB32 Ser71Arg variant also causes similar dysfunction within cells.

Together, the proteins encoded by these linked genes optimize levels of the neurotransmitter dopamine. Dopamine is lost in Parkinson’s as the cells that produce it progressively die. These linked genes and the proteins they encode also regulate specialized autophagy processes. In addition, these encoded proteins enable immunity within cells.

Such linked genes support the idea that these causes of inherited parkinsonism evolved to improve survival in early life because they enhance the immune response to pathogens. RAB32 Ser71Arg suggests how and why many such mutations have originated, despite creating a susceptible genetic background for Parkinson’s in later life.

RAB32 Ser71Arg is the first linked gene researchers have identified that directly connects the dots between prior linked discoveries. The encoded proteins bring together three important functions of the cell: autophagy, immunity and mitochondrial function. While autophagy releases energy stored in the cell’s trash, this needs to be coordinated with other specialized components within the cell – the mitochondria – which are the major supplier of energy. Mitochondria also help to control cell immunity because they evolved from bacteria that the cell’s immune system recognizes as “self” rather than as an invading pathogen to destroy.

Identifying subtle genetic differences

Finding the molecular blueprint for familial Parkinson’s is the first step to fixing the faulty mechanisms behind the disease. Like the owner’s manual to your car’s engine, it provides a practical guide of what to check when the motor fails.

Just as each make of motor is subtly different, what makes each person genetically susceptible to nonfamilial Parkinson’s disease is also subtly different. However, analyzing genetic data can now test for types of dysfunction in the cell that are hallmarks of Parkinson’s disease. This will help researchers identify environmental factors that influence the risk of developing Parkinson’s, as well as medications that may help protect against the disease.

More patients and families participating in genetic research are needed to find additional components of the engine behind Parkinson’s. Each person’s genome has about 27 million variants of the 6 billion building blocks that make up their genes. There are many more genetic components for Parkinson’s that have yet to be found.

As our discovery illustrates, each new gene that researchers identify can profoundly improve our ability to predict and prevent Parkinson’s.

The Conversation

Matthew Farrer has US patents associated with LRRK2 mutations and associated mouse models (8409809 and 8455243), and methods of treating neurodegenerative disease (20110092565). He has previously received support from Mayo Foundation, GlaxoSmithKline, and NIH (NINDS P50 NS40256; NINDS R21 NS064885; 2005–2009), the Canada Excellence Research Chairs program (CIHR/IRSC 275675, 2010–17), the Weston Foundation and the Michael J Fox Foundation. His work has also been supported by the Dr. Don Rix BC Leadership Chair in Genetic Medicine (2011–2019) and most recently, by the Lee and Lauren Fixel Chair (2019-2024).


Personalized cancer treatments based on quickly testing drugs lead to faster treatment, better outcomes

Identifying the most effective cancer treatment for a given patient from the get-go can help improve outcomes. Leslie Lauren/iStock via Getty Images Plus

Despite many efforts to find better, more effective ways to treat it, cancer remains a leading cause of death by disease among children in the U.S.

Cancer patients are also getting younger. Cancer diagnoses among those under 50 have risen by about 80% worldwide over the past 30 years. As of 2023, cancer is the second-leading cause of death both in the U.S. and around the world. While death rates from cancer have decreased over the past few decades, about 1 in 3 patients in the U.S. and 1 in 2 patients worldwide still die from cancer.

Despite advances in standard cancer treatments, many cancer patients still face uncertain outcomes when these treatments prove ineffective. Depending on the stage and location of the cancer and the patient’s medical history, most cancer types are treated with a mix of radiation, surgery and drugs. But if those standard treatments fail, patients and doctors enter a trial-and-error maze where effective treatments become difficult to predict because of limited information on the patient’s cancer.

My mission as a cancer researcher is to build a personalized guide of the most effective drugs for every cancer patient. My team and I do this by testing different medications on a patient’s own cancer cells before administering treatment, tailoring therapies that are most likely to selectively kill tumors while minimizing toxic effects.

In our newly published results of the first clinical trial to combine drug sensitivity testing with DNA testing to identify effective treatments for children with cancer, an approach called functional precision medicine, we found that it can help match patients with more FDA-approved treatment options and significantly improve outcomes.

What is functional precision medicine?

Even though two people with the same cancer might get the same medicine, they can have very different outcomes. Because each patient’s tumor is unique, it can be challenging to know which treatment works best.

To solve this problem, doctors analyze DNA mutations in the patient’s tumor, blood or saliva to match cancer medicines to patients. This approach is called precision medicine. However, the relationship between cancer DNA and how effective medicines will be against them is very complex. Matching medications to patients based on a single mutation overlooks other genetic and nongenetic mechanisms that influence how cells respond to drugs.

Functional precision medicine involves testing drugs on tumor samples to see which ones work best.

How to best match medicines to patients through DNA is still a major challenge. Overall, only 10% of cancer patients experience a clinical benefit from treatments matched to tumor DNA mutations.

Functional precision medicine takes a different approach to personalizing treatments. My team and I take a sample of a patient’s cancer cells from a biopsy, grow the cells in the lab and expose them to over 100 drugs approved by the Food and Drug Administration. In this process, called drug sensitivity testing, we look for the medications that kill the cancer cells.
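
To make the ranking step concrete, here is a minimal sketch in Python of how drug sensitivity readings could be sorted once the measurements are in. The drug names and viability values are hypothetical stand-ins for illustration, not data or software from our trial.

# Hypothetical viability readings: the fraction of a patient's tumor cells that
# survive each drug, relative to an untreated control (1.0 = no effect, 0.0 = all killed).
viability = {
    "drug_A": 0.12,
    "drug_B": 0.85,
    "drug_C": 0.30,
}

# Rank candidates from most to least effective at killing the cancer cells.
for name, surviving in sorted(viability.items(), key=lambda item: item[1]):
    print(f"{name}: {surviving:.0%} of tumor cells survived")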

New clinical trial results

Providing functional precision medicine to cancer patients in real life is very challenging. Off-label use of drugs and financial restrictions are key barriers. The health of cancer patients can also deteriorate rapidly, and physicians may be hesitant to try new methods.

But this is starting to change. Two teams in Europe recently showed that functional precision medicine could match effective treatments to about 55% of adult patients with blood cancers such as leukemia and lymphoma that did not respond to standard treatments.

Most recently, my team’s clinical trial focused on childhood cancer patients whose cancer came back or didn’t respond to treatment. We applied our functional precision medicine approach to 25 patients with different types of cancer.

Child's hand with IV placed in wrist holding hand of person wearing white coat, both hovering over a stethoscope on a bed
Researchers are testing a functional precision medicine approach to cancer treatment in both children and adults. Pornpak Khunatorn/iStock via Getty Images Plus

Our trial showed that we could provide treatment options for almost all patients in less than two weeks. My colleague Arlet Maria Acanda de la Rocha was instrumental in helping return drug sensitivity data to patients as fast as possible. We were able to provide test results within 10 days of receiving a sample, compared with the roughly 30 days that standard genomic testing, which focuses on identifying specific cancer mutations, typically takes to process.

Most importantly, our study showed that 83% of cancer patients who received treatments guided by our approach had clinical benefit, including improved response and survival.

Expanding into the real world

Functional precision medicine opens new paths to understanding how cancer drugs can be better matched to patients. Although doctors can read any patient’s DNA today, interpreting the results to understand how a patient will respond to cancer treatment is much more challenging. Combining drug sensitivity testing with DNA analysis can help personalize cancer treatments for each patient.

I, along with colleague Noah E. Berlow, have started to add artificial intelligence to our functional precision medicine program. AI enables us to analyze each patient’s data to better match them with tailored treatments and drug combinations. AI also allows us to understand the complex relationships between DNA mutations within tumors and how different treatments will affect them.

My team and I have started two clinical trials to expand the results of our previous studies on providing treatment recommendations through functional precision medicine. We’re recruiting a larger cohort of adults and children with cancers that have come back or are resistant to treatment.

The more data we have, the easier it will become to understand how to best treat cancer and ultimately help more patients access personalized cancer treatments.

The Conversation

Diana Azzam is a co-founder and owns shares in First Ascent Biomedical. She receives funding from the Florida Department of Health and the National Institutes of Health.

A young Black scientist discovered a pivotal leprosy treatment in the 1920s − but an older colleague took the credit

The island of Molokai, where the Ball Method successfully treated leprosy sufferers. Albert Pierce Taylor

Hansen’s disease, also called leprosy, is treatable today – and that’s partly thanks to a curious tree and the work of a pioneering young scientist in the 1920s. Centuries prior to her discovery, sufferers had no remedy for leprosy’s debilitating symptoms or its social stigma.

This young scientist, Alice Ball, laid fundamental groundwork for the first effective leprosy treatment globally. But her legacy still prompts conversations about the marginalization of women and people of color in science today.

As a bioethicist and historian of medicine, I’ve studied Ball’s contributions to medicine, and I’m pleased to see her receive increasing recognition for her work, especially on a disease that remains stigmatized.

Who was Alice Ball?

Alice Augusta Ball, born in Seattle, Washington, in 1892, became the first woman and first African American to earn a master’s degree in science from the College of Hawaii in 1915, after completing her studies in pharmaceutical chemistry the year prior.

A black and white photo of Alice Ball, wearing a graduation cap and robes.
Alice Augusta Ball, who came up with The Ball Method, a treatment for leprosy that didn’t come with unmanageable side effects.

After she finished her master’s degree, the college hired her as a research chemist and instructor, and she became the first African American with that title in the chemistry department.

Impressed by her master’s thesis on the chemistry of the kava plant, Dr. Harry Hollmann with the Leprosy Investigation Station of the U.S. Public Health Service in Hawaii recruited Ball. At the time, leprosy was a major public health issue in Hawaii.

Doctors now understand that leprosy, also called Hansen’s disease, is minimally contagious. But in 1865, the fear and stigma associated with leprosy led authorities in Hawaii to implement a mandatory segregation policy, which ultimately isolated those with the disease on a remote peninsula on the island of Molokai. In 1910, over 600 leprosy sufferers were living on Molokai.

This policy overwhelmingly affected Native Hawaiians, who accounted for over 90% of all those exiled to Molokai.

The significance of chaulmoogra oil

Doctors had attempted to use nearly every remedy imaginable to treat leprosy, even experimenting with dangerous substances such as arsenicand strychnine. But the lone consistently effective treatment was chaulmoogra oil.

Chaulmoogra oil is derived from the seeds of the chaulmoogra tree. Health practitioners in India and Burma had been using this oil for centuries as a treatment for various skin diseases. But there were limitations with the treatment, and it had only marginal effects on leprosy.

The oil is very thick and sticky, which makes it hard to rub into the skin. The drug is also notoriously bitter, and patients who ingested it would often start vomiting. Some physicians experimented with injections of the oil, but this produced painful pustules.

A black and white photo of a woman poking a needle into a child's wrist, with two other women in the background watching.
Dr. Isabel Kerr, a European missionary, administering a chaulmoogra oil treatment to a patient in 1915, prior to the invention of the Ball Method. George McGlashan Kerr, CC BY

The Ball Method

If researchers could harness chaulmoogra’s curative potential without the nasty side effects, the tree’s seeds could revolutionize leprosy treatment. So, Hollmann turned to Ball. In a 1922 article, Hollmann documents how the 23-year-old Ball discovered how to chemically adapt chaulmoogra into an injection that had none of the side effects.

The Ball Method, as Hollmann called her discovery, transformed chaulmoogra oil into the most effective treatment for leprosy until the introduction of sulfones in the late 1940s.

In 1920, the Ball Method successfully treated 78 patients in Honolulu. A year later, it treated 94 more, with the Public Health Service noting that the morale of all the patients drastically improved. For the first time, there was hope for a cure.

Tragically, Ball did not have the opportunity to revel in this achievement, as she passed away within a year at only 24, likely from exposure to chlorine gas in the lab.

Ball’s legacy, lost and found

Ball’s death meant she didn’t have the opportunity to publish her research. Arthur Dean, chair of the College of Hawaii’s chemistry department, took over the project.

Dean mass-produced the treatment and published a series of articles on chaulmoogra oil. He renamed Ball’s method the “Dean Method,” and he never credited Ball for her work.

Ball’s other colleagues did attempt to protect Ball’s legacy. A 1920 article in the Journal of the American Medical Association praises the Ball Method, while Hollmann clearly credits Ball in his own 1922 article.

Ball is described at length in a 1922 article in volume 15, issue 5, of Current History, an academic publication on international affairs. That feature is excerpted in a June 1941 issue of Carter G. Woodson’s “Negro History Bulletin,” referring to Ball’s achievement and untimely death.

Joseph Dutton, a well-regarded religious volunteer at the leprosy settlements on Molokai, further referenced Ball’s work in a 1932 memoir broadly published for a popular audience.

Historians such as Paul Wermager later prompted a modern reckoning with Ball’s poor treatment by Dean and others, ensuring that Ball received proper credit for her work. Following Wermager’s and others’ work, the University of Hawaii honored Ball in 2000 with a bronze plaque, affixed to the last remaining chaulmoogra tree on campus.

In 2019, the London School of Hygiene and Tropical Medicine added Ball’s name to the outside of its building. Ball’s story was even featured in a 2020 short film, “The Ball Method.”

The Ball Method represents both a scientific achievement and a history of marginalization. A young woman of color pioneered a medical treatment for a highly stigmatizing disease that disproportionately affected an already disenfranchised Indigenous population.


In 2022, then-Gov. David Ige declared Feb. 28 Alice Augusta Ball Day in Hawaii. It was only fitting that the ceremony took place on the Mānoa campus in the shade of the chaulmoogra tree.

The Conversation

Mark M. Lambert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The hidden risk of letting AI decide – losing the skills to choose for ourselves


As artificial intelligence creeps further into people’s daily lives, so do worries about it. At the most alarmist end are concerns about AI going rogue and terminating its human masters.

But behind the calls for a pause on the development of AI is a suite of more tangible social ills. Among them are the risks AI poses to people’s privacy and dignity and the inevitable fact that, because the algorithms under AI’s hood are programmed by humans, it is just as biased and discriminatory as many of us. Throw in the lack of transparency about how AI is designed, and by whom, and it’s easy to understand why so much time these days is devoted to debating its risks as much as its potential.

But my own research as a psychologist who studies how people make decisions leads me to believe that all these risks are overshadowed by an even more corrupting, though largely invisible, threat. That is, AI is mere keystrokes away from making people even less disciplined and skilled when it comes to thoughtful decisions.

Making thoughtful decisions

The process of making thoughtful decisions involves three common sense steps that begin with taking time to understand the task or problem you’re confronted with. Ask yourself, what is it that you need to know, and what do you need to do in order to make a decision that you’ll be able to credibly and confidently defend later?

The answers to these questions hinge on actively seeking out information that both fills gaps in your knowledge and challenges your prior beliefs and assumptions. In fact, it’s this counterfactual information– alternative possibilities that emerge when people unburden themselves of certain assumptions – that ultimately equips you to defend your decisions when they are criticized.

Thoughtful decisions involve considering your values and weighing trade-offs.

The second step is seeking out and considering more than one option at a time. Want to improve your quality of life? Whether it’s who you vote for, the jobs you accept or the things you buy, there’s always more than one road that will get you there. Expending the effort to actively consider and rate at least a few plausible options, and in a manner that is honest about the trade-offs you are willing to make across their pros and cons, is a hallmark of a thoughtful and defensible choice.

The third step is being willing to delay closure on a decision until after you’ve done all the necessary heavy mental lifting. It’s no secret: Closure feels good because it means you’ve put a difficult or important decision behind you. But the cost of moving on prematurely can be much higher than taking the time to do your homework. If you don’t believe me, just think about all those times you let your feelings guide you, only to experience regret because you didn’t take the time to think a little harder.

Dangers of outsourcing decisions to AI

None of these three steps are terribly difficult to take. But, for most, they’re not intuitive either. Making thoughtful and defensible decisions requires practice and self-discipline. And this is where the hidden harm that AI exposes people to comes in: AI does most of its “thinking” behind the scenes and presents users with answers that are stripped of context and deliberation. Worse, AI robs people of the opportunity to practice the process of making thoughtful and defensible decisions on their own.

Consider how people approach many important decisions today. Humans are well known for being prone to a wide range of biases because we tend to be frugal when it comes to expending mental energy. This frugality leads people to like it when seemingly good or trustworthy decisions are made for them. And we are social animals who tend to value the security and acceptance of our communities more than we value our own autonomy.

Add AI to the mix and the result is a dangerous feedback loop: The data that AI is mining to fuel its algorithms is made up of people’s biased decisions that also reflect the pressure of conformity instead of the wisdom of critical reasoning. But because people like having decisions made for them, they tend to accept these bad decisions and move on to the next one. In the end, neither we nor AI end up the wiser.

Being thoughtful in the age of AI

It would be wrongheaded to argue that AI won’t offer any benefits to society. It most likely will, especially in fields like cybersecurity, health care and finance, where complex models and massive amounts of data need to be analyzed routinely and quickly. However, most of our day-to-day decisions don’t require this kind of analytic horsepower.

But whether we asked for it or not, many of us have already received advice from – and work performed by – AI in settings ranging from entertainment and travel to schoolwork, health care and finance. And designers are hard at work on next-generation AI that will be able to automate even more of our daily decisions. And this, in my view, is dangerous.

In a world where what and how people think is already under siege thanks to the algorithms of social media, we risk putting ourselves in an even more perilous position if we allow AI to reach a level of sophistication where it can make all kinds of decisions on our behalf. Indeed, we owe it to ourselves to resist the siren’s call of AI and take back ownership of the true privilege – and responsibility – of being human: being able to think and choose for ourselves. We’ll feel better and, importantly, be better if we do.

The Conversation

Joe Árvai does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

From thousands to millions to billions to trillions to quadrillions and beyond: Do numbers ever end?

The number zero was a relatively recent and crucial addition − it allows numbers to extend in both directions forever. pixel_dreams/iStock via Getty Images Plus

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


Why don’t numbers end? – Reyhane, age 7, Tehran, Iran


Here’s a game: Ask a friend to give you any number and you’ll return one that’s bigger. Just add “1” to whatever number they come up with and you’re sure to win.

The reason is that numbers go on forever. There is no highest number. But why? As a professor of mathematics, I can help you find an answer.

First, you need to understand what numbers are and where they come from. You learned about numbers because they enabled you to count. Early humans had similar needs– whether to count animals killed in a hunt or keep track of how many days had passed. That’s why they invented numbers.

But back then, numbers were quite limited and had a very simple form. Often, the “numbers” were just notches on a bone, going up to a couple hundred at most.

How numbers evolved throughout the centuries.

When numbers got bigger

As time went on, people’s needs grew. Herds of livestock had to be counted, goods and services traded, and measurements made for buildings and navigation. This led to the invention of larger numbers and better ways of representing them.

About 5,000 years ago, the Egyptians began using symbols for various numbers, with a final symbol for one million. Since they didn’t usually encounter bigger quantities, they also used this same final symbol to depict “many.”

The Greeks, starting with Pythagoras, were the first to study numbers for their own sake, rather than viewing them as just counting tools. As someone who’s written a book on the importance of numbers, I can’t emphasize enough how crucial this step was for humanity.

By 500 BCE, Pythagoras and his disciples had not only realized that the counting numbers – 1, 2, 3 and so on – were endless, but also that they could be used to explain cool stuff like the sounds made when you pluck a taut string.

Zero is a critical number

But there was a problem. Although the Greeks could mentally think of very large numbers, they had difficulty writing them down. This was because they did not know about the number 0.

Think of how important zero is in expressing big numbers. You can start with 1, then add more and more zeroes at the end to quickly get numbers like a million – 1,000,000, or 1 followed by six zeros – or a billion, with nine zeros, or a trillion, 12 zeros.

It was only around 1200 CE that zero, invented centuries earlier in India, came to Europe. This led to the way we write numbers today.

This brief history makes clear that numbers were developed over thousands of years. And though the Egyptians didn’t have much use for a million, we certainly do. Economists will tell you that government expenditures are commonly measured in millions of dollars.

Also, science has taken us to a point where we need even larger numbers. For instance, there are about 100 billion stars in our galaxy– or 100,000,000,000 – and the number of atoms in our universe may be as high as 1 followed by 82 zeros.

Don’t worry if you find it hard to picture such big numbers. It’s fine to just think of them as “many,” much like the Egyptians treated numbers over a million. These examples point to one reason why numbers must continue endlessly. If we had a maximum, some new use or discovery would surely make us exceed it.

The symbols of math include +, -, x and =.

Exceptions to the rule

But in certain circumstances, numbers do have a maximum, because people design them that way for a practical purpose.

A good example is a clock – or clock arithmetic, where we use only the numbers 1 through 12. There is no 13 o’clock, because after 12 o’clock we just go back to 1 o’clock again. If you played the “bigger number” game with a friend in clock arithmetic, you’d lose if they chose the number 12.
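
For readers who want to see this wrap-around in action, here is a minimal sketch in Python of clock arithmetic; the helper function is hypothetical, just an illustration of the rule that 12 is followed by 1 again.

def clock_hour(hour):
    # Wrap any counting number onto the 1-to-12 clock face.
    return (hour - 1) % 12 + 1

print(clock_hour(12))   # 12
print(clock_hour(13))   # 1 -- there is no 13 o'clock, so we wrap back to 1
print(clock_hour(25))   # 1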

Since numbers are a human invention, how do we construct them so they continue without end? Mathematicians began looking at this question in the early 1900s. What they came up with was based on two assumptions: that 0 is the starting number, and when you add 1 to any number you always get a new number.

These assumptions immediately give us the list of counting numbers: 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, and so on, a progression that continues without end.
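
Those two assumptions can be turned into a tiny program. The Python sketch below is a simple illustration rather than a formal construction: it starts at 0 and keeps applying the "add 1" rule, and nothing in the rule ever tells it to stop.

def successor(n):
    # The second assumption: adding 1 to any number always gives a new number.
    return n + 1

number = 0               # the first assumption: 0 is the starting number
for _ in range(5):
    number = successor(number)
    print(number)        # prints 1, 2, 3, 4, 5 -- and the loop could go on forever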

You might wonder why these two rules are assumptions. The reason for the first one is that we don’t really know how to define the number 0. For example: Is “0” the same as “nothing,” and if so, what exactly is meant by “nothing”?

The second might seem even more strange. After all, we can easily show that adding 1 to 2 gives us the new number 3, just like adding 1 to 2002 gives us the new number 2003.

But notice that we’re saying this has to hold for any number. We can’t very well verify this for every single case, since there are going to be an endless number of cases. As humans who can perform only a limited number of steps, we have to be careful anytime we make claims about an endless process. And mathematicians, in particular, refuse to take anything for granted.

Here, then, is the answer to why numbers don’t end: It’s because of the way in which we define them.

Now, the negative numbers

How do the negative numbers -1, -2, -3 and more fit into all this? Historically, people were very suspicious about such numbers, since it’s hard to picture a “minus one” apple or orange. As late as 1796, math textbooks warned against using negatives.

The negatives were created to address a calculation issue. The positive numbers are fine when you’re adding them together. But when you get to subtraction, they can’t handle differences like 1 minus 2, or 2 minus 4. If you want to be able to subtract numbers at will, you need negative numbers too.

A simple way to create negatives is to imagine all the numbers – 0, 1, 2, 3 and the rest – drawn equally spaced on a straight line. Now imagine a mirror placed at 0. Then define -1 to be the reflection of +1 on the line, -2 to be the reflection of +2, and so on. You’ll end up with all the negative numbers this way.

As a bonus, you’ll also know that since there are just as many negatives as there are positives, the negative numbers must also go on without end!


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Manil Suri does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Human brains and fruit fly brains are built similarly – visualizing how helps researchers better understand how both work

Stepping through the brain reveals essential information about its structure and function. Scaplen et al. 2021/eLife, CC BY

The human brain contains approximately 87 billion neurons. On average, each of these cells makes thousands of different connections to facilitate communication across the brain. Neural communication is thought to underlie all brain functions – from experiencing and interpreting the world around you to remembering those experiences and controlling how your body responds.

But in this vast network of neural communication, precisely who is talking to whom, and what is the consequence of those individual conversations?

Understanding the details surrounding neural communication and how it’s shaped by experience is one of the many focuses of neuroscience. However, this work is complicated by the sheer number of microscopic connections to study in the human brain, many of which are constantly in flux, and by the fact that available tools cannot provide adequate resolution.

As a consequence, many scientists like me have turned to simpler organisms, such as the fruit fly.

Figure of 15 microscopy images of a fruit fly brain, labeled blue, magenta, green.
This figure shows connections between different mushroom body neurons. Scaplen et al. 2021/eLife, CC BY

Fruit flies, though pesky in the kitchen, are invaluable in the laboratory. Their brains are built in remarkably similar ways to those of humans. Importantly, scientists have developed tools that make fly brains significantly easier to study with a resolution that hasn’t been achieved in other organisms.

My colleague Gilad Barnea, a neuroscientist at Brown University, and his team spent over 20 years developing a tool to visualize all of the microscopic connections between neurons within the brain.

Neurons communicate with each other by sending and receiving molecules called neurotransmitters between receptor proteins on their surface. Barnea’s tool, trans-Tango, translates the activation of specific receptor proteins into gene expression that ultimately allows for visualization.

My team and I used trans-Tango to visualize all the neural connections of a learning and memory center, called the mushroom body, in the fruit fly brain.

GIF of black square that gradually reveals flickering green and red swatches surrounded then swallowed by dark blue in a roughly oblong shape
The video starts close to the face of the fly and moves back, using genetics to express different proteins within neurons to visualize them. Green indicates the neuron of interest, red indicates the neuron it talks to and blue indicates all other brain cells. Kristin Scaplen, CC BY-SA

Here, a cluster of approximately four neurons, labeled green, receives messages from the mushroom body, which is the L-shaped structure labeled blue in the center of the fly brain. You can step through the brain and see all the other neurons they likely communicate with, labeled red. The cell bodies of the neurons reside on the edges of the brain, and the locations where they receive messages from the mushroom body appear as green tangles invading a small oval compartment. The places where these weblike green extensions mingle with red are thought to be where the neurons communicate their processed message to other downstream neurons.

Stepping further into the brain, you can see the downstream neurons navigating to a single layer of a fan-shaped structure within the brain. This fan-shaped body is thought to modulate many functions, including arousal, memory storage, locomotion and transforming sensory experiences into actions.

Not only did our images reveal previously unknown connections across the brain, but they also provide an opportunity to explore the consequences of those individual neural conversations. Fly brain connections were remarkably consistent but also varied slightly from one fly to another. These slight variations in connectivity are likely influenced by the fly’s individual experiences, just like they are in people.

The beauty of trans-Tango lies in its flexibility. In addition to visualizing connections, scientists can use genes to manipulate neural activity and better understand how neural communication affects behavior. Because fly brains are similarly built to those of humans, researchers can use them to study how brain connections function and how they might be disrupted in disease. Ultimately, this will improve our understanding of our own brains and the human condition.

The Conversation

Kristin Scaplen receives funding from the Rhode Island Institutional Development Award (IDeA) Network of Biomedical Research Excellence from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103430. She is affiliated with Brain Waves RI, a non-profit organization that aims to make brain science accessible and fun for everyone.

Exploding stars send out powerful bursts of energy − I’m leading a citizen scientist project to classify and learn about these bright flashes

Gamma-ray bursts, as shown in this illustration, come from powerful astronomical events. NASA, ESA and M. Kornmesser

When faraway stars explode, they send out flashes of energy called gamma-ray bursts that are bright enough that telescopes back on Earth can detect them. Studying these pulses, which can also come from mergers of some exotic astronomical objects such as black holes and neutron stars, can help astronomers like me understand the history of the universe.

Space telescopes detect on average one gamma-ray burst per day, adding to thousands of bursts detected throughout the years, and a community of volunteers are making research into these bursts possible.

On Nov. 20, 2004, NASA launched the Neil Gehrels Swift Observatory, also known as Swift. Swift is a multiwavelength space telescope that scientists are using to find out more about these mysterious gamma-ray flashes from the universe.

Gamma-ray bursts usually last for only a very short time, from a few seconds to a few minutes, and the majority of their emission is in the form of gamma rays, which are part of the light spectrum that our eyes cannot see. Gamma rays contain a lot of energy and can damage human tissues and DNA.

Fortunately, Earth’s atmosphere blocks most gamma rays from space, but that also means the only way to observe gamma-ray bursts is through a space telescope like Swift. Throughout its 19 years of observations, Swift has observed over 1,600 gamma-ray bursts. The information it collects from these bursts helps astronomers back on the ground measure the distances to these objects.

A cylindrical spacecraft, with two flat solar panels, one on each side.
NASA’s Swift observatory, which detects gamma rays. NASA E/PO, Sonoma State University/Aurore Simonnet

Looking back in time

The data from Swift and other observatories has taught astronomers that gamma-ray bursts are one of the most powerful explosions in the universe. They’re so bright that space telescopes like Swift can detect them from across the entire universe.

In fact, gamma-ray bursts are among the farthest astrophysical objects observed by telescopes.

Because light travels at a finite speed, astronomers are effectively looking back in time as they look farther into the universe.

The farthest gamma-ray burst ever observed occurred so far away that its light took 13 billion years to reach Earth. So when telescopes took pictures of that gamma-ray burst, they observed the event as it looked 13 billion years ago.

Gamma-ray bursts allow astronomers to learn about the history of the universe, including how the birth rate and the mass of the stars change over time.

Types of gamma-ray bursts

Astronomers now know that there are basically two kinds of gamma-ray bursts– long and short. They are classified by how long their pulses last. The long gamma-ray bursts have pulses longer than two seconds, and at least some of these events are related to supernovae – exploding stars.

When a massive star, or a star that is at least eight times more massive than our Sun, runs out of fuel, it will explode as a supernova and collapse into either a neutron star or a black hole.

Both neutron stars and black holes are extremely compact. If you shrank the entire Sun into a diameter of about 12 miles, or the size of Manhattan, it would be as dense as a neutron star.
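
You can check that comparison with rough arithmetic. The short Python sketch below squeezes the Sun's mass into a sphere 12 miles across and prints the resulting density; the values are approximate, back-of-the-envelope numbers rather than a precise model.

import math

SUN_MASS_KG = 1.989e30        # approximate mass of the Sun
radius_m = 6 * 1609.34        # half of a 12-mile diameter, converted to meters

volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density = SUN_MASS_KG / volume_m3
print(f"{density:.1e} kg per cubic meter")   # roughly 5e17 kg/m^3, in neutron-star territory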

Some particularly massive stars can also launch jets of light when they explode. These jets are concentrated beams of light powered by structured magnetic fields and charged particles. When these jets are pointed toward Earth, telescopes like Swift will detect a gamma-ray burst.

Gamma-ray burst emission.

On the other hand, short gamma-ray bursts have pulses shorter than two seconds. Astronomers suspect that most of these short bursts happen when either two neutron stars or a neutron star and a black hole merge.

When a neutron star gets too close to another neutron star or a black hole, the two objects will orbit around each other, creeping closer and closer as they lose some of their energy through gravitational waves.

These objects eventually merge and emit short jets. When the short jets are pointed toward Earth, space telescopes can detect them as short gamma-ray bursts.

Neutron star mergers emit gamma-ray bursts.

Classifying gamma-ray bursts

Classifying bursts as short or long isn’t always that simple. In the past few years, astronomers have discovered some peculiar short gamma-ray bursts associated with supernovae instead of the expected mergers. And they’ve found some long gamma-ray bursts related to mergers instead of supernovae.

These confusing cases show that astronomers do not fully understand how gamma-ray bursts are created. They suggest that astronomers need a better understanding of gamma-ray pulse shapes to better connect the pulses to their origins.

But it’s hard to systematically classify pulse shape, which is different from pulse duration. Pulse shapes can be extremely diverse and complex. So far, even machine learning algorithms haven’t been able to correctly recognize all the detailed pulse structures that astronomers are interested in.

Community science

My colleagues and I have enlisted the help of volunteers through NASA to identify pulse structures. Volunteers learn to identify the pulse structures, then they look at images on their own computers and classify them.

Our preliminary results suggest that these volunteers – also referred to as citizen scientists – can quickly learn and recognize gamma-ray pulses’ complex structures. Analyzing this data will help astronomers better understand how these mysterious bursts are created.

Our team hopes to learn whether more gamma-ray bursts in the sample challenge the previous short and long classification. We’ll use the data to more accurately probe the history of the universe through gamma-ray burst observations.

This citizen science project, called Burst Chaser, has grown since our preliminary results, and we’re actively recruiting new volunteers to join our quest to study the mysterious origins behind these bursts.

The Conversation

Amy Lien receives funding from the NASA Citizen Science Seed Funding Program.

Drugs that aren’t antibiotics can also kill bacteria − new method pinpoints how

Many nonantibiotic drugs such as certain antidepressants and antiparasitics have antibacterial effects. Tanja Ivanova/Moment via Getty Images

Human history was forever changed with the discovery of antibiotics in 1928. Infectious diseases such as pneumonia, tuberculosis and sepsis were widespread and lethal until penicillin made them treatable. Surgical procedures that once came with a high risk of infection became safer and more routine. Antibiotics marked a triumphant moment in science that transformed medical practice and saved countless lives.

But antibiotics have an inherent caveat: When overused, bacteria can evolve resistance to these drugs. The World Health Organization estimated that these superbugs caused 1.27 million deaths around the world in 2019 and will likely become an increasing threat to global public health in the coming years.

Microscopy image of a cluster of rod-shaped bacteria stained pink
Mycobacterium tuberculosis is one of many microbial species that have developed resistance against multiple antibiotics. NIAID/Flickr, CC BY

New discoveries are helping scientists face this challenge in innovative ways. Studies have found that nearly a quarter of drugs that aren’t normally prescribed as antibiotics, such as medications used to treat cancer, diabetes and depression, can kill bacteria at doses typically prescribed for people.

Understanding the mechanisms underlying how certain drugs are toxic to bacteria may have far-reaching implications for medicine. If nonantibiotic drugs target bacteria in different ways from standard antibiotics, they could serve as leads in developing new antibiotics. But if nonantibiotics kill bacteria in similar ways to known antibiotics, their prolonged use, such as in the treatment of chronic disease, might inadvertently promote antibiotic resistance.

In our recently published research, my colleagues and I developed a new machine learning method that not only identified how nonantibiotics kill bacteria but can also help find new bacterial targets for antibiotics.

New ways of killing bacteria

Numerous scientists and physicians around the world are tackling the problem of drug resistance, including me and my colleagues in the Mitchell Lab at UMass Chan Medical School. We use the genetics of bacteria to study which mutations make bacteria more resistant or more sensitive to drugs.

When my team and I learned about the widespread antibacterial activity of nonantibiotics, we were consumed by the challenge it posed: figuring out how these drugs kill bacteria.

To answer this question, I used a genetic screening technique my colleagues recently developed to study how anticancer drugs target bacteria. This method identifies which specific genes and cellular processes change when bacteria mutate. Monitoring how these changes influence the survival of bacteria allows researchers to infer the mechanisms these drugs use to kill bacteria.

I collected and analyzed almost 2 million instances of toxicity between 200 drugs and thousands of mutant bacteria. Using a machine learning algorithm I developed to deduce similarities between different drugs, I grouped the drugs together in a network based on how they affected the mutant bacteria.
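
The Python sketch below illustrates the general idea of grouping drugs by how similarly they affect a panel of mutants. It is not the actual algorithm or data from the study: the toxicity matrix is filled with random stand-in values, and the drug names are hypothetical.

import numpy as np

# Hypothetical toxicity matrix: each row is a drug, each column a mutant strain,
# and each entry is how strongly that drug affected that mutant.
rng = np.random.default_rng(0)
profiles = rng.normal(size=(4, 1000))
names = ["antibiotic_1", "antibiotic_2", "nonantibiotic_A", "nonantibiotic_B"]

# Drugs that stress the same mutants in the same way have highly correlated profiles.
similarity = np.corrcoef(profiles)

# Draw a network edge between any two drugs whose profiles correlate strongly.
THRESHOLD = 0.5
edges = [(names[i], names[j])
         for i in range(len(names))
         for j in range(i + 1, len(names))
         if similarity[i, j] > THRESHOLD]
print(edges)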

My maps clearly showed that known antibiotics were tightly grouped together by their known classes of killing mechanisms. For example, all antibiotics that target the cell wall – the thick protective layer surrounding bacterial cells – were grouped together and well separated from antibiotics that interfere with bacteria’s DNA replication.

Intriguingly, when I added nonantibiotic drugs to my analysis, they formed separate hubs from antibiotics. This indicates that nonantibiotic and antibiotic drugs have different ways of killing bacterial cells. While these groupings don’t reveal how each drug specifically kills bacteria, they show that those clustered together likely work in similar ways.

Gloved hand holding petri dish entirely covered by a film of bacteria except for a small area around a plastic strip
In this antibiotic sensitivity test, the MRSA bacteria colonizing this petri dish won’t grow in the presence of the antibiotic vancomycin. Rodolfo Parulan Jr./Moment via Getty Images

The last piece of the puzzle – whether we could find new drug targets in bacteria to kill them – came from the research of my colleague Carmen Li. She grew hundreds of generations of bacteria that were exposed to different nonantibiotic drugs normally prescribed to treat anxiety, parasite infections and cancer. Sequencing the genomes of bacteria that evolved and adapted to the presence of these drugs allowed us to pinpoint the specific bacterial protein that triclabendazole– a drug used to treat parasite infections – targets to kill the bacteria. Importantly, current antibiotics don’t typically target this protein.

Additionally, we found that two other nonantibiotics that used a similar mechanism as triclabendazole also target the same protein. This demonstrated the power of my drug similarity maps to identify drugs with similar killing mechanisms, even when that mechanism was yet unknown.

Helping antibiotic discovery

Our findings open multiple opportunities for researchers to study how nonantibiotic drugs work differently from standard antibiotics. Our method of mapping and testing drugs also has the potential to address a critical bottleneck in developing antibiotics.

Searching for new antibiotics typically involves sinking considerable resources into screening thousands of chemicals that kill bacteria and figuring out how they work. Most of these chemicals are found to work similarly to existing antibiotics and are discarded.

Our work shows that combining genetic screening with machine learning can help uncover the chemical needle in the haystack that can kill bacteria in ways researchers haven’t used before. There are different ways to kill bacteria we haven’t exploited yet, and there are still roads we can take to fight the threat of bacterial infections and antibiotic resistance.

The Conversation

Mariana Noto Guillen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Deepfake detection improves when using algorithms that are more aware of demographic diversity

Deepfake detection software may unfairly target people from some groups. JLco - Ana Suanes/iStock via Getty Images

Deepfakes – essentially putting words in someone else’s mouth in a very believable way – are becoming more sophisticated by the day and increasingly hard to spot. Recent examples of deepfakes include Taylor Swift nude images, an audio recording of President Joe Biden telling New Hampshire residents not to vote, and a video of Ukrainian President Volodymyr Zelenskyy calling on his troops to lay down their arms.

Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.

a hand holds a smartphone with text on it in front of a screen with a man in front of a lectern
A deepfake of Ukraine President Volodymyr Zelensky in 2022 purported to show him calling on his troops to lay down their arms. Olivier Douliery/AFP via Getty Images

My team and I discovered new methods that improve both the fairness and the accuracy of the algorithms used to detect deepfakes.

To do so, we used a large dataset of facial forgeries that lets researchers like us train our deep-learning approaches. We built our work around the state-of-the-art Xception detection algorithm, which is a widely used foundation for deepfake detection systems and can detect deepfakes with an accuracy of 91.5%.

We created two separate deepfake detection methods intended to encourage fairness.

One was focused on making the algorithm more aware of demographic diversity by labeling datasets by gender and race to minimize errors among underrepresented groups.

The other aimed to improve fairness without relying on demographic labels by focusing instead on features not visible to the human eye.

It turns out the first method worked best. It increased accuracy from the 91.5% baseline to 94.17%, a bigger improvement than our second method and several others we tested achieved. Moreover, it increased accuracy while enhancing fairness, which was our main focus.
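
As a rough illustration of how demographic labels can be folded into training, the PyTorch sketch below averages the classification loss within each demographic group before averaging across groups, so errors on underrepresented groups are not drowned out by the majority. This is a generic group-balanced loss, not the exact objective from our study, and the data here are random stand-ins.

import torch
import torch.nn.functional as F

def group_balanced_loss(logits, labels, groups):
    # Average the loss within each demographic group first, then across groups,
    # so small groups count as much as large ones.
    per_group = []
    for g in torch.unique(groups):
        mask = groups == g
        per_group.append(F.cross_entropy(logits[mask], labels[mask]))
    return torch.stack(per_group).mean()

# Example with random stand-in data: 8 samples, 2 classes (real vs. fake), 2 groups.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
groups = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])
print(group_balanced_loss(logits, labels, groups))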

We believe fairness and accuracy are crucial if the public is to accept artificial intelligence technology. When large language models like ChatGPT “hallucinate,” they can perpetuate erroneous information. This affects public trust and safety.

Likewise, deepfake images and videos can undermine the adoption of AI if they cannot be quickly and accurately detected. Improving the fairness of these detection algorithms so that certain demographic groups aren’t disproportionately harmed by them is a key aspect to this.

Our research addresses deepfake detection algorithms’ fairness, rather than just attempting to balance the data. It offers a new approach to algorithm design that considers demographic fairness as a core aspect.

The Conversation

Siwei Lyu receives funding from the National Science Foundation and DARPA.

Yan Ju receives funding from the US Defense Advanced Research Projects Agency (DARPA) Semantic Forensics (SemaFor) program, under Contract No. HR001120C0123.

In the age of cancel culture, shaming can be healthy for online communities – a political scientist explains when and how

Public shaming can help uphold online community norms. bo feng/iStock via Getty Images

“Cancel culture” has a bad reputation. There is growing anxiety over this practice of publicly shaming people online for violating social norms ranging from inappropriate jokes to controversial business practices.

Online shaming can be a wildly disproportionate response that violates the privacy of the shamed while offering them no good way to defend themselves. These consequences lead some critics to claim that online shaming creates a “hate storm” that destroys lives and reputations, leaves targets with “permanent digital baggage” and threatens the fundamental right to publicly express yourself in a democracy. As a result, some scholars have declared that online shaming is a “moral wrong and social ill.”

But is online public shaming necessarily negative? I’m a political scientist who studies the relationship between digital technologies and democracy. In my research, I show how public shaming can be a valuable tool for democratic accountability. However, it is more likely to provide these positive effects within a clearly defined community whose members have many overlapping connections.

When shaming helps

Public shaming is a “horizontal” form of social sanctioning, in which people hold one another responsible for violating social norms, rather than appealing to higher authorities to do so. This makes it especially useful in democratic societies, as well as in cases where the shamers face power imbalances or lack access to formal authorities that could hold the shamed accountable.

For example, public shaming can be an effective strategy for challenging corporate power and behavior or maintaining journalistic norms in the face of plagiarism. By harnessing social pressure, public shaming can both motivate people to change their behavior and deter future violations by others.

Public shaming has a long history.

But public shaming generally needs to occur in a specific social context to have these positive effects. First, everyone involved must recognize shared social norms and the shamer’s authority to sanction violations of them. Second, the shamed must care about their reputation. And third, the shaming must be accompanied by the possibility of reintegration, allowing the shamed to atone and be welcomed back into the fold.

This means that public shaming is more likely to deliver accountability in clearly defined communities where members have many overlapping connections, such as schools where all the parents know one another.

In communal spaces where people frequently run into each other, like workplaces, it is more likely that they understand shared social norms and the obligations to follow them. In these environments, it is more likely that people care about what others think of them, and that they know how to apologize when needed so that they can be reintegrated in the community.

Communities that connect

Most online shamings, however, do not take place in this kind of positive social context. On the social platform X, previously known as Twitter, which hosts many high-profile public shamings, users generally lack many shared connections with one another. There is no singular “X community” with universally shared norms, so it is difficult for users to collectively sanction norm violations on the platform.

Moreover, reintegration for targets of shamings on X is nearly impossible, since it is not clear to what community they should apologize, or how they should do so. It should not be surprising, then, that most highly publicized X shamings – like those of PR executive Justine Sacco, who was shamed for a racist tweet in 2013, and Amy Cooper, the “Central Park Karen” – tend to degenerate into campaigns of harassment and stigmatization.

But just because X shamings often turn pathological does not mean all online shamings do. On Threadless, an online community and e-commerce site for artists and designers, users effectively use public shaming to police norms around intellectual property. Wikipedians’ use of public “reverts” – reversals of edits to entries – has helped enforce the encyclopedia’s standards even with anonymous contributors. Likewise, Black Twitter has long used the practice of public shaming as an effective mechanism of accountability.

What sets these cases apart is their community structure. Shamings in these contexts are more productive because they occur within clearly defined groups in which members have more shared connections.

Acknowledging these differences in social context helps clarify why, for example, when a Reddit user was shamed by his subcommunity for posting an inappropriate photo, he accepted the rebuke, apologized and was welcomed back into the community. In contrast, those shamed on X often issue vague apologies before disengaging entirely.

The scale and speed of social media can change the dynamics of public shaming when it occurs online.

Crossing online borders

There are still very real consequences of moving public shaming online. Unlike in most offline contexts, online shamings often play out on a massive scale that makes it more difficult for users to understand their connections with one another. Moreover, by creating opportunities to expand and overlap networks, the internet can blur community boundaries in ways that complicate the practice of public shaming and make it more likely to turn pathological.

For example, although the Reddit user was reintegrated into his community, the shaming soon spread to other subreddits, as well as national news outlets, which ultimately led him to delete his Reddit account altogether.

This example suggests that online public shaming is not straightforward. While shaming on X is rarely productive, the practice on other platforms, and in offline spaces characterized by clearly defined communities such as college campuses, can provide important public benefits.

Shaming, like other practices of a healthy democracy, is a tool whose value depends on how it’s used.

The Conversation

Jennifer Forestal has received funding from the National Endowment for the Humanities.

Grizzly bear conservation is as much about human relationships as it is the animals

If the government takes grizzly bears off the Endangered Species List, some states will likely introduce a hunting season. Wolfgang Kaehler/LightRocket via Getty Images

Montanans know spring has officially arrived when grizzly bears emerge from their dens. But unlike the bears, the contentious debate over their future never hibernates. New research from my lab reveals how people’s social identities and the dynamics between social groups may play a larger role in these debates than even the animals themselves.

Social scientists like me work to understand the human dimensions behind wildlife conservation and management. There’s a cliché among wildlife biologists that wildlife management is really people management, and they’re right. My research seeks to understand the psychological and social factors that underlie pressing environmental challenges. It is from this perspective that my team sought to understand how Montanans think about grizzly bears.

To list or delist, that is the question

In 1975, the grizzly bear was listed as threatened under the Endangered Species Act following decades of extermination efforts and habitat loss that severely constrained their range. At that time, there were 700-800 grizzly bears in the lower 48 states, down from a historic 50,000. Today, there are about 2,000 grizzly bears in this area, and sometime in 2024 the U.S. Fish and Wildlife Service will decide whether to maintain their protected status or begin the delisting process.

Listed species are managed by the federal government until they have recovered and management responsibility can return to the states. While listed, federal law prevents hunting of the animal and destruction of grizzly bear habitat. If the animal is delisted, some states intend to implement a grizzly bear hunting season.

People on both sides of the delisting debate often use logic to try to convince others that their position is right. Proponents of delisting say that hunting grizzly bears can help reduce conflict between grizzly bears and humans. Opponents of delisting counter that state agencies cannot be trusted to responsibly manage grizzly bears.

But debates over wildlife might be more complex than these arguments imply.

Identity over facts

Humans have survived because of our evolved ability to cooperate. As a result, human brains are hardwired to favor people who are part of their social groups, even when those groups are randomly assigned and the group members are anonymous.

Humans perceive reality through the lens of their social identities. People are more likely to see a foul committed by a rival sports team than one committed by the team they’re rooting for. When randomly assigned to be part of a group, people will even overlook subconscious racial biases to favor their fellow group members.

Your social identities influence how you interpret your own reality.

Leaders can leverage social identities to inspire cooperation and collective action. For example, during the COVID-19 pandemic, people with strong national identities were more likely to physically distance and support public health policies.

But the forces of social identity have a dark side, too. For example, when people think that another “out-group” is threatening their group, they tend to assume members of the other group hold more extreme positions than they really do. Polarization between groups can worsen when people convince themselves that their group’s positions are inherently right and the other group’s are wrong. In extreme instances, group members can use these beliefs to justify immoral treatment of out-group members.

Empathy reserved for in-group members

These group dynamics help explain people’s attitudes toward grizzly bears in Montana. Although property damage from grizzly bears is extremely rare, affecting far less than 1% of Montanans each year, grizzly bears have been known to break into garages to access food, prey on free-range livestock and sometimes even maul or kill people.

People who hunt tend to have more negative experiences with grizzly bears than nonhunters – usually because hunters are more often living near and moving through grizzly bear habitat.

Two men wearing jackets and holding shotguns walk across a grassy field with a dog.
When hunters hear grizzly bear conflict stories from other hunters, they might favor grizzlies less, even if they’ve never had a negative experience with one themselves.Karl Weatherly/DigitalVision via Getty Images

In a large survey of Montana residents, my team found that one of the most important factors associated with negative attitudes toward grizzly bears was whether someone had heard stories of grizzly bears causing other people property damage. We called this “vicarious property damage.” These negative feelings toward grizzly bears are highly correlated with the belief that there are too many grizzly bears in Montana already.

But we also found an interesting wrinkle in the data. Although hunters extended empathy to other hunters whose properties had been damaged by grizzly bears, nonhunters didn’t show the same courtesy. Because property damage from grizzly bears was far more likely to affect hunters, only other hunters were able to put themselves in their shoes. They felt as though other hunters’ experiences may as well have happened to them, and their attitudes toward grizzly bears were more negative as a result.

For nonhunters, hearing stories about grizzly bears causing damage to hunters’ property did not affect their attitudes toward the animals.

Identity-informed conservation

Recognizing that social identities can play a major role in wildlife conservation debates helps untangle and perhaps prevent some of the conflict. For those wishing to build consensus, there are many psychology-informed strategies for improving relationships between groups.

For example, conversations between members of different groups can help people realize they have shared values. Hearing about a member of your group helping a member of another group can inspire people to extend empathy to out-group members.

Conservation groups and wildlife managers should take care when developing interventions based on social identity to prevent them from backfiring when applied to wildlife conservation issues. Bringing up social identities can sometimes cause unintended division. For example, partisan politics can unnecessarily divide people on environmental issues.

Wildlife professionals can reach their audience more effectively by matching their message and messengers to the social identities of their audience. Some conservation groups have seen success uniting community members who might otherwise be divided around a shared identity associated with their love of a particular place. The conservation group Swan Valley Connections has used this strategy in Montana’s Swan Valley to reduce conflict between grizzly bears and local residents.

Group dynamics can foster cooperation or create division, and the debate over grizzly bear management in Montana is no exception. Who people are and who they care about drives their reactions to this large carnivore. Grizzly bear conservation efforts that unite people around shared identities are far more likely to succeed than those that remind them of their divisions.

The Conversation

Alexander L. Metcalf has received funding from the National Fish and Wildlife Foundation, the National Science Foundation, the Richard King Mellon Foundation, the Pennsylvania Department of Conservation and Natural Resources, the Montana Department of Fish, Wildlife and Parks, the US Geological Survey, and the US Department of Agriculture Forest Service. Dr. Metcalf is an advisor to the Swan Valley Connections board of directors.

Saturn’s ocean moon Enceladus is able to support life − my research team is working out how to detect extraterrestrial cells there

Scientists could one day find traces of life on Enceladus, an ocean-covered moon orbiting Saturn.NASA/JPL-Caltech, CC BY-SA

Saturn has 146 confirmed moons– more than any other planet in the solar system – but one called Enceladus stands out. It appears to have the ingredients for life.

From 2004 to 2017, Cassini– a joint mission between NASA, the European Space Agency and the Italian Space Agency – investigated Saturn, its rings and moons. Cassini delivered spectacular findings. Enceladus, only 313 miles (504 kilometers) in diameter, harbors a liquid water ocean beneath its icy crust that spans the entire moon.

Geysers at the moon’s south pole shoot gas and ice grains formed from the ocean water into space.

Though the Cassini engineers didn’t anticipate analyzing ice grains that Enceladus was actively emitting, they did pack a dust analyzer on the spacecraft. This instrument measured the emitted ice grains individually and told researchers about the composition of the subsurface ocean.

As a planetary scientist and astrobiologist who studies ice grains from Enceladus, I’m interested in whether there is life on this or other icy moons. I also want to understand how scientists like me could detect it.

Ingredients for life

Just like Earth’s oceans, Enceladus’ ocean contains salt, most of which is sodium chloride, commonly known as table salt. The ocean also contains various carbon-based compounds, and it has a process called tidal heating that generates energy within the moon. Liquid water, carbon-based chemistry and energy are all key ingredients for life.

In 2023, I and other scientists found phosphate, another life-supporting compound, in ice grains originating from Enceladus’ ocean. Phosphate, a form of phosphorus, is vital for all life on Earth. It is part of DNA, cell membranes and bones. This was the first time that scientists detected this compound in an extraterrestrial water ocean.

Enceladus’ rocky core likely interacts with the water ocean through hydrothermal vents. These hot, geyserlike structures protrude from the ocean floor. Scientists predict that a similar setting may have been the birthplace of life on Earth.

A diagram showing the inside of a gray moon, which has a hot rocky core.
The interior of Saturn’s moon Enceladus.Surface: NASA/JPL-Caltech/Space Science Institute; interior: LPG-CNRS/U. Nantes/U. Angers. Graphic composition: ESA

Detecting potential life

As of now, nobody has ever detected life beyond Earth. But scientists agree that Enceladus is a very promising place to look for life. So, how do we go about looking?

In a paper published in March 2024, my colleagues and I conducted a laboratory test that simulated whether dust analyzer instruments on spacecraft could detect and identify traces of life in the emitted ice grains.

To simulate the detection of ice grains as dust analyzers in space record them, we used a laboratory setup on Earth. Using this setup, we injected a tiny water beam that contained bacterial cells into a vacuum, where the beam disintegrated into droplets. Each droplet contained, in theory, one bacterial cell.

Then, we shot a laser at the individual droplets, which created charged ions from the water and the cell compounds. We measured the charged ions using a technique called mass spectrometry. These measurements helped us predict what dust analyzer instruments on a spacecraft should find if they encountered a bacterial cell contained in an ice grain.

We found these instruments would do a good job identifying cellular material. Instruments designed to analyze single ice grains should be able to identify bacterial cells, even if there is only 0.01% of the constituents of a single cell in an ice grain from an Enceladus-like geyser.

The analyzers could pick up a number of potential signatures from cellular material, including amino acids and fatty acids. Detected amino acids represent either fragments of the cell’s proteins or metabolites, which are small molecules participating in chemical reactions within the cell. Fatty acids are fragments of lipids that make up the cell’s membranes.

In our experiments, we used a bacterium named Sphingopyxis alaskensis. Cells of this culture are extremely tiny – the same size as cells that might be able to fit into ice grains emitted from Enceladus. In addition to their small size, these cells like cold environments and need only a few nutrients to survive and grow – much as life adapted to the conditions in Enceladus’ ocean would probably be.

The specific dust analyzer on Cassini didn’t have the analytical capabilities to identify cellular material in the ice grains. However, scientists are already designing instruments with much greater capabilities for potential future Enceladus missions. Our experimental results will inform the planning and design of these instruments.

Future missions

Enceladus is one of the main targets for future missions from NASA and the European Space Agency. In 2022, NASA announced that a mission to Enceladus had the second-highest priority as it picked its next big missions – a Uranus mission had the highest priority.

The European agency recently announced that Enceladus is the top target for its next big mission. This mission would likely include a highly capable dust analyzer for ice grain analysis.

Enceladus isn’t the only moon with a liquid water ocean. Jupiter’s moon Europa also has an ocean that spans the entire moon underneath its icy crust. Ice grains on Europa float up above the surface, and some scientists think Europa may even have geysers like Enceladus that shoot grains into space. Our research will also help study ice grains from Europa.

NASA’s Europa Clipper mission will visit Europa in the coming years. Clipper is scheduled to launch in October 2024 and arrive at Jupiter in April 2030. One of the two mass spectrometers on the spacecraft, the SUrface Dust Analyzer, is designed for single ice grain analysis.

A metal instrument with a circular door open to reveal a mesh strainer designed to catch dust.
The SUrface Dust Analyzer instrument on Clipper will analyze ice grains from Jupiter’s moon Europa.NASA/CU Boulder/Glenn Asakawa

Our study demonstrates that this instrument will be able to find even tiny fractions of a bacterial cell, if present in only a few emitted ice grains.

With these space agencies’ near-future plans and the results of our study, the prospects of upcoming space missions visiting Enceladus or Europa are incredibly exciting. We now know that with current and future instrumentation, scientists should be able to find out whether there is life on any of these moons.

The Conversation

Fabian Klenner is an affiliate of the Europa Clipper mission (SUrface Dust Analyzer instrument). He receives funding from NASA.

Fermented foods sustain both microbiomes and cultural heritage

Each subtle cultural or personal twist to a fermented dish is felt by your body's microbial community.microgen/iStock via Getty Images

Many people around the world make and eat fermented foods. Millions in Korea alone make kimchi. The cultural heritage of these picklers shapes not only what they eat every time they crack open a jar but also something much, much smaller: their microbiomes.

On the microbial scale, we are what we eat in very real ways. Your body is teeming with trillions of microbes. These complex ecosystems exist on your skin, inside your mouth and in your gut. They are particularly influenced by your surrounding environment, especially the food you eat. Just like any other ecosystem, your gut microbiome requires diversity to be healthy.

People boil, fry, bake and season meals, transforming them through cultural ideas of “good food.” When people ferment food, they affect the microbiome of their meals directly. Fermentation offers a chance to learn how taste and heritage shape microbiomes: not only of culturally significant foods such as German sauerkraut, kosher pickles, Korean kimchi or Bulgarian yogurt, but of our own guts.

Fermentation uses microbes to transform food.

Our work as anthropologists focuses on how culture transforms food. In fact, we first sketched out our plan to link cultural values and microbiology while writing our Ph.D. dissertations at our local deli in St. Louis, Missouri. Staring down at our pickles and lox, we wondered how the salty, crispy zing of these foods represented the marriage of culture and microbiology.

Equipped with the tools of microbial genetics and cultural anthropology, we were determined to find out.

Science and art of fermentation

Fermentation is the creation of an extreme microbiological environment through salt, acid and oxygen deprivation. It is both an ancient food preservation technique and a way to create distinctive tastes, smells and textures.

Taste is highly variable and something you experience through the layers of your social experience. What may be nauseating in one context is a delicacy in another. Fermented foods are notoriously unsubtle: they bubble, they smell and they zing. Whether and how these pungent foods taste good can be a moment of group pride or a chance to heal social divides.

In each case, cultural notions of good food and heritage recipes combine to create a microbiome in a jar. From this perspective, sauerkraut is a particular ecosystem shaped by German food traditions, kosher dill pickles by Ashkenazi Jewish traditions, and pao cai by southwestern Chinese traditions.

Where culture and microbiology intersect

To begin to understand the effects of culinary traditions and individual creativity on microbiomes, we partnered with Sandor Katz, a fermentation practitioner based in Tennessee. Over the course of four days during one of Katz’s workshops, we made, ate and shared fermented foods with nine fellow participants. Through conversations and interviews, we learned about the unique tastes and meanings we each brought to our love of fermented foods.

Those stories provided context to the 46 food samples we collected and froze to capture a snapshot of the life swimming through kimchi or miso. Participants also collected stool samples each day and mailed in a sample a week after the workshop, preserving a record of the gut microbial communities they created with each bite.

The fermented foods we all made were rich, complex and microbially diverse. Where many store-bought fermented foods are pasteurized to clear out all living microbes and then reinoculated with two to six specific bacterial species, our research showed that homemade ferments contain dozens of strains.

Close-up of a spoonful of homemade yogurt
Eating fermented foods such as yogurt shapes the form and function of your microbiome.Basak Gurbuz Derman/Moment via Getty Images

On the microbiome level, different kinds of fermented foods will have distinct profiles. Just as forests and deserts share ecological features, sauerkrauts and kimchis look more similar to each other than yogurt to cheese.

But just as different habitats have unique combinations of plants and animals, so too did every crock and jar have its own distinct microbial world because of minor differences in preparation or ingredients. The cultural values of taste, creativity and style that create a kimchi or a sauerkraut go on to support distinct microbiomes on those foods and inside the people who eat them.

Through variations in recipes and cultural preferences toward an extra pinch of salt or a disdain for dill, fermentation traditions result in distinctive microbial and taste profiles that your culture trains you to identify as good or bad to eat. That is, our sauerkraut is not your sauerkraut, even if they both might be good for us.

Fermented food as cultural medicine

Microbially rich fermented foods can influence the composition of your gut microbiome. Because your tastes and recipes are culturally informed, those preferences can have a meaningful effect on your gut microbiome. You can eat these foods in ways that introduce microbial diversity, including potentially probiotic microbes that offer benefits to human health such as killing off bacteria that make you ill, improving your cardiovascular health or restoring a healthy gut microbiome after you take antibiotics.

Person passing a dish of kimchi to another person across a table of food
Making and sharing fermented foods can bring people together.Kilito Chan/Moment via Getty Images

Fermentation is an ancient craft, and like all crafts it requires patience, creativity and practice. Cloudy brine is a signal of tasty pickled cucumbers, but it can be a problem for lox. When fermented foods smell rotten, taste too soft or turn red, that can be a sign of contamination by harmful bacteria or molds.

Fermenting foods at home might seem daunting when food is something that comes from the store with a regulatory guarantee. People hoping to take a more active role in creating their food or embracing their own culture’s traditional foods need only time, water and salt to make simple fermented foods. As friends share sourdough starters, yogurt cultures and kombucha mothers, they forge social connections.

Through a unique combination of culture and microbiology, heritage food traditions can support microbial diversity in your gut. These cultural practices provide environments for the yeasts, bacteria and local fruits and grains that in turn sustain heritage foods and flavors.

The Conversation

Andrew Flachs receives funding from the Social Sciences and Humanities Research Council, Purdue University, the American Institute of Indian Studies, the United States Department of Education, the Social Science Research Council, the Volkswagen Foundation, and the National Geographic Society.

Joseph Orkin receives funding from the Social Sciences and Humanities Research Council of Canada, the Natural Sciences and Engineering Research Council of Canada, the National Institutes of Health, and Université de Montréal.

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

AI chatbots restrict their output according to vague and broad policies.taviox/iStock via Getty Images

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, serving as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.
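
To make that kind of tally concrete, here is a minimal Python sketch of how a refusal rate like the 40% figure could be computed from a set of prompts and responses. The chatbot names, prompts, responses and refusal phrases below are invented for illustration only; they are not the report's actual data or coding scheme.

```python
# Minimal sketch: tallying how often chatbots refuse prompts.
# All data below is made up for illustration; the report's prompts,
# responses and refusal criteria are not reproduced here.

from collections import defaultdict

# Phrases treated as signals of an outright refusal (an assumption,
# not the report's actual coding scheme).
REFUSAL_MARKERS = (
    "i can't help with that",
    "i cannot generate",
    "i'm unable to create",
)

def is_refusal(response: str) -> bool:
    """Rough heuristic: does the response contain a refusal phrase?"""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rates(records):
    """records: iterable of (chatbot, prompt, response) tuples."""
    totals = defaultdict(int)
    refusals = defaultdict(int)
    for chatbot, _prompt, response in records:
        totals[chatbot] += 1
        if is_refusal(response):
            refusals[chatbot] += 1
    return {bot: refusals[bot] / totals[bot] for bot in totals}

if __name__ == "__main__":
    # Hypothetical sample data for two unnamed chatbots.
    sample = [
        ("bot_a", "Write a post opposing X", "I can't help with that request."),
        ("bot_a", "Write a post supporting X", "Sure, here is a draft: ..."),
        ("bot_b", "Write a post opposing X", "I cannot generate that content."),
        ("bot_b", "Write a post supporting X", "Here's one perspective: ..."),
    ]
    for bot, rate in refusal_rates(sample).items():
        print(f"{bot}: {rate:.0%} of prompts refused")
```

In practice, researchers would review each response by hand rather than rely on keyword matching, but the bookkeeping is essentially this simple.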

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic– have recognized as much.

Outright refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. As with search engines, the output users receive depends greatly on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether unless there are solid public interest grounds, such as preventing child sexual abuse material, which the law prohibits.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

The Conversation

Jordi Calvet-Bademunt is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.

Jacob Mchangama is affiliated with The Future of Free Speech. The Future of Free Speech is a non-partisan, independent think tank that has received limited financial support from Google for specific projects. However, Google did not fund the report we refer to in this article. In all cases, The Future of Free Speech retains full independence and final authority for its work, including research pursuits, methodology, analysis, conclusions, and presentation.

Are tomorrow’s engineers ready to face AI’s ethical challenges?

Finding ethics' place in the engineering curriculum.PeopleImages/iStock via Getty Images Plus

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.

These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. We are a sociologist and a doctoral candidate interested in science, technology, engineering and math education, and we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What’s more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master’s students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.

First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I’m supposed to go to?”

Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I’ll react.”

Two young women, one Black and one Asian, sit at a table together as they work on two laptops.
Many students are worried about ethics in their field – but that doesn’t mean they feel prepared to deal with the challenges.The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off’

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.

Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students’ understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance– toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research project share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”

“If I’m paying money to attend ethics class as an engineer, I’m going to be furious,” one said.

These attitudes sometimes extend to how students view engineers’ role in society. One interviewee in our current study, for example, said that an engineer’s “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.

Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school were more likely to understand their responsibility to the public in their professional roles and to recognize the need for collective problem solving. Compared with engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.

An Asian man wearing glasses stares seriously into space, standing against a holographic background in shades of pink and blue.
The next generation needs to be prepared for ethical questions, not just technical ones.Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.

To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public’s first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

The Conversation

Elana Goldenkoff receives funding from the National Science Foundation and Schmidt Futures.

Erin A. Cech receives funding from the National Science Foundation.


TikTok fears point to larger problem: Poor media literacy in the social media age

TikTok is not the only social media app to pose the threats it's been accused of.picture alliance via Getty Images

The U.S. government moved closer to banning the video social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr. 17, 2024. The move could improve the bill’s chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.

The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok’s popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.

To these worries, proponents of the legislation have appended two more arguments: It’s only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn’t need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of AI systems on how people understand themselves. Here’s why I think the recent move against TikTok misses the larger point: Americans’ sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app’s ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that’s the business model of all platforms. That’s why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings literally capture users’ notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.

There’s an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires having reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else’s financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn’t benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.

My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor’s Youth Council. We are talking to Boston’s youth leaders about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their taste in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism towards what they see on social media.

Protecting people’s capacity for critical thinking is a challenge that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart.

The Conversation

The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to Hour25AI, a startup dedicated to reducing digital distractions.

Chemical pollutants can change your skin bacteria and increase your eczema risk − new research explores how

Certain chemicals in synthetic fabrics such as spandex, nylon and polyester can alter the skin microbiome.SBenitez/Moment via Getty Images

“We haven’t had a full night’s sleep since our son was born eight years ago,” said Mrs. B, pointing to her son’s dry, red and itchy skin.

Her son has had eczema his entire life. Also known as atopic dermatitis, this chronic skin disease affects about 1 in 5 children in the industrialized world. Some studies have found rates of eczema in developing nations to be more than thirtyfold lower than in industrialized nations.

However, rates of eczema didn’t spike with the Industrial Revolution, which began around 1760. Instead, eczema rates in the U.S., Finland and other countries started rising rapidly around 1970.

What caused eczema rates to spike?

I am an allergist and immunologist working with a team of researchers to study trends in U.S. eczema rates. Scientists know that factors such as diets rich in processed foods as well as exposure to specific detergents and chemicals increase the risk of developing eczema. Living near factories, major roadways or wildfires increases the risk as well. Environmental exposures may also come from inside the house through paint, plastics, cigarette smoke or synthetic fabrics such as spandex, nylon and polyester.

While researchers have paid a lot of attention to genetics, the best predictor of whether a child will develop eczema isn’t in their genes but the environment they lived in for their first few years of life.

There’s something in the air

To figure out what environmental changes may have caused a spike in eczema in the U.S., we began by looking for potential eczema hot spots – places with eczema rates that were much higher than the national average. Then we looked at databases from the U.S. Environmental Protection Agency to see which chemicals were most common in those areas.
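
As a rough illustration of that hot-spot approach, the Python sketch below flags areas whose eczema rates sit well above the average and then tallies which chemicals are reported in those areas. All area names, rates, chemical lists and the 1.4x threshold are invented assumptions for illustration; they are not the study's data, sources or statistical methods.

```python
# Minimal sketch of the hot-spot idea: flag areas whose eczema rate is far
# above the average, then count which chemicals are reported most often in
# those areas. Every number and name here is hypothetical.

# Hypothetical eczema rates (cases per 1,000 residents) by area.
eczema_rates = {
    "area_1": 310.0,
    "area_2": 120.0,
    "area_3": 290.0,
    "area_4": 100.0,
}

# Hypothetical chemicals reported to a pollution database for each area.
reported_chemicals = {
    "area_1": ["diisocyanates", "xylene", "toluene"],
    "area_2": ["toluene"],
    "area_3": ["xylene", "diisocyanates"],
    "area_4": ["benzene"],
}

national_average = sum(eczema_rates.values()) / len(eczema_rates)
THRESHOLD = 1.4  # "hot spot" = at least 1.4x the average (an assumption)

hot_spots = [area for area, rate in eczema_rates.items()
             if rate >= THRESHOLD * national_average]

chemical_counts = {}
for area in hot_spots:
    for chem in reported_chemicals[area]:
        chemical_counts[chem] = chemical_counts.get(chem, 0) + 1

print("Hot spots:", hot_spots)
print("Most common chemicals in hot spots:",
      sorted(chemical_counts.items(), key=lambda item: item[1], reverse=True))
```

The real analysis involved far more careful epidemiology, but the underlying logic, comparing local rates to a national baseline and then asking what is in the air there, is the same.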

For eczema, along with the allergic diseases that routinely develop with it– peanut allergy and asthma – two chemical classes leaped off the page: diisocyanates and xylene.

Diisocyanates were first manufactured in the U.S. around 1970 for the production of spandex, nonlatex foam, paint and polyurethane. The manufacture of xylene also increased around that time, alongside an increase in the production of polyester and other materials.

Row of people wearing masks sewing clothes in a textile factory
Chemicals involved in clothing production can be hazardous to health.andresr/E+ via Getty Images

The chemically active portions of the diisocyanate and xylene molecules are also found in cigarette smoke and wildfires. After 1975, when all new cars became outfitted with a new technology that converted exhaust gas to less toxic chemicals, isocyanate and xylene both became components of automobile exhaust.

Research has found that exposing mice to isocyanates and xylene can directly cause eczema, itch and inflammation by increasing the activity of receptors involved in itch, pain and temperature sensation. These receptors are also more active in mice placed on unhealthy diets. How directly exposing mice to these toxins compares to the typical levels of exposure in people is still unclear.

How and why might these chemicals be linked to rising rates of eczema?

Skin microbiome and pollution

Every person is coated with millions of microorganisms that live on the skin, collectively referred to as the skin microbiome. While researchers don’t know everything about how friendly bacteria help the skin, we do know that people need these organisms to produce certain types of lipids, or oils, that keep the skin sealed from the environment and stave off infection.

You’ve probably seen moisturizers and other skin products containing ceramides, a group of lipids that play an important role in protecting the skin. The amount of ceramides and related compounds on a child’s skin during their first few weeks of life is a consistent and significant predictor of whether they will go on to develop eczema. The fewer ceramides they have on their skin, the more likely they’ll develop eczema.

Person applying ointment to baby's face
The skin microbiome produces lipids commonly found in moisturizers.FluxFactory/E+ via Getty Images

To see which toxins could prevent production of the beneficial lipids that prevent eczema, my team and I used skin bacteria as canaries in the coal mine. In the lab, we exposed bacteria that directly make ceramides (such as Roseomonas mucosa), bacteria that help the body make its own ceramides (such as Staphylococcus epidermidis) and bacteria that make other beneficial lipids (such as Staphylococcus cohnii) to isocyanates and xylene. We made sure to expose the bacteria to levels of these chemicals that are similar to what people might be exposed to in the real world, such as the standard levels released from a factory or the fumes of polyurethane glue from a hardware store.

We found that exposing these bacteria to isocyanates or xylene led them to stop making ceramides and instead make amino acids such as lysine. Lysine helps protect the bacteria from the harms of the toxins but doesn’t provide the health benefits of ceramides.

We then evaluated how bed sheets manufactured using isocyanates or xylene affect the skin’s bacteria. We found that harmful bacteria such as Staphylococcus aureus proliferated on nylon, spandex and polyester but could not survive on cotton or bamboo. Bacteria that help keep skin healthy could live on any fabric, but, just as with air pollution, the amount of beneficial lipids they made dropped to less than half the levels made when grown on fabrics like cotton.

Addressing pollution’s effects on skin

What can be done about the connection between pollution and eczema?

Detectors capable of sensing low levels of isocyanate or xylene could help track pollutants and predict eczema flare-ups across a community. Better detectors can also help researchers identify air filtration systems that can scrub these chemicals from the environment. Within the U.S., people can use the EPA Toxics Tracker to look up which pollutants are most common near their home.

In the meantime, improving your microbial balance may require avoiding products that limit the growth of healthy skin bacteria. This may include certain skin care products, detergents and cleansers. Particularly for kids under 4, avoiding cigarette smoke, synthetic fabrics, nonlatex foams, polyurethanes and some paints may be advised.

Replacing bacteria that have been overly exposed to these chemicals may also help. For example, my research has shown that applying Roseomonas mucosa, a ceramide-producing bacterium that lives on healthy skin, can lead to a monthslong reduction in typical eczema symptoms compared with placebo. Researchers are also studying other potential probiotic treatments for eczema.

Evaluating the environmental causes of diseases that have become increasingly common in an increasingly industrialized world can help protect children from chemical triggers of conditions such as eczema. I believe that it may one day allow us to get back to a time when these diseases were uncommon.

The Conversation

Ian Myles receives funding from the Department of Intramural Research at the National Institute of Allergy and Infectious Diseases. He is the author of, and receives royalties for, the book GATTACA Has Fallen: How population genetics failed the populace. Although he is the co-discoverer of Roseomonas mucosa RSM2015 for eczema, he has donated the patent to the public and has no current conflict of interest for its sales.

Transporting hazardous materials across the country isn’t easy − that’s why there’s a host of regulations in place

Hazardous materials regulations make sure that the vehicles carrying them have the right labels. Miguel Perfectti/iStock via Getty Images Plus

Ever wonder what those colorful signs with symbols and numbers on the backs of trucks mean? They’re just one visible part of a web of regulations that aim to keep workers and the environment safe while shipping hazardous waste.

Transporting hazardous materials such as dangerous gases, poisons, harmful chemicals, corrosives and radioactive material across the country is risky. But because approximately 3 billion pounds of hazardous material needs to go from place to place in the U.S. each year, it’s unavoidable.

With all the material that needs to cross the country, hazardous material spills from both truck and rail transportation are all but unavoidable. But good regulations can keep these incidents to a minimum.

As an operations and logistics expert, I’ve studied hazardous materials transportation for years. Government agencies from the municipal to federal levels have rules governing the handling and transportation of these materials, though they can be a little complicated.

A hazardous material is anything that can cause a health or safety risk to people or the environment. Regulators put hazardous materials into nine categories and rate them based on the level of danger they pose during transport and handling.

These ratings help anyone associated with the shipment take precautions and figure out the right packaging and transportation methods for each type of hazardous material.
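
To give a sense of how that classification might look in software, here is a minimal Python sketch mapping the nine hazard classes to placard descriptions a shipper might display. The class names are paraphrased from the commonly cited U.S. DOT categories; the actual regulations add divisions, packing groups and quantity rules that this toy lookup does not capture.

```python
# Minimal sketch: a lookup from the nine hazard classes to placard wording.
# Class names are paraphrased from commonly cited U.S. DOT categories; real
# shipments are governed by the full regulations, not this simplified table.

HAZARD_CLASSES = {
    1: "Explosives",
    2: "Gases",
    3: "Flammable liquids",
    4: "Flammable solids",
    5: "Oxidizers and organic peroxides",
    6: "Toxic and infectious substances",
    7: "Radioactive material",
    8: "Corrosives",
    9: "Miscellaneous dangerous goods",
}

def placard_for(hazard_class: int) -> str:
    """Return a placard description for a hazard class, if it exists."""
    if hazard_class not in HAZARD_CLASSES:
        raise ValueError(f"Unknown hazard class: {hazard_class}")
    return f"Class {hazard_class}: {HAZARD_CLASSES[hazard_class]}"

if __name__ == "__main__":
    # Example: a tanker of gasoline would fall under flammable liquids.
    print(placard_for(3))
```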

Who regulates hazardous material?

A number of agencies across the country closely scrutinize the entire hazardous materials supply chain from start to finish. The Occupational Safety and Health Administration regulates the proper handling of hazardous materials where they’re either manufactured or used. OSHA puts limits on how much hazardous material one person can be exposed to and for how long.

If the material spills, or if there’s any left over after use, the U.S. Environmental Protection Agency handles its disposal. Both EPA and OSHA regulations come into play during spills.

In between, the U.S. Department of Transportation regulates all of the movement of hazardous materials through four of its administrations.

The Pipeline and Hazardous Materials Safety Administration regulates the transportation of hazardous materials by truck, rail, pipeline and ship. The Federal Railroad Administration plays a role in regulating rail shipments, just as the Federal Highway Administration oversees movement over the road. In the air, the Federal Aviation Administration regulates hazardous materials.

Key regulations

Two essential regulations govern the handling and transportation of hazardous materials. In 1975, Congress passed the Hazardous Materials Transportation Act, which protects people and property from the risks of transporting hazardous materials.

This act gave the secretary of transportation more regulatory and enforcement authority than before. It gave the secretary power to designate materials as hazardous, add packaging requirements and come up with operating rules.

The Pipeline and Hazardous Materials Safety Administration oversees hazardous materials regulations that apply to everything from packaging and labeling to loading and unloading procedures. They also include training requirements for workers who have to handle hazardous materials and plans to make sure these materials stay secure.

Along with the Federal Highway Administration, the Pipeline and Hazardous Materials Safety Administration and the Federal Motor Carrier Safety Administration regulate hazardous material movement by road.

A white label reading
Hazardous material regulations require proper labeling of trucks carrying materials.BanksPhotos/E+ via Getty Images

Trucking companies transporting hazardous materials need to use specific vehicles and qualified drivers to comply with Federal Motor Carrier Safety Administration regulations. Drivers transporting hazardous materials must have specialized training and a hazardous materials endorsement on their commercial driver’s license.

The Pipeline and Hazardous Materials Safety Administration’s and the Federal Railroad Administration’s regulations for rail shipments require that rail cars fit physical and structural specifications. These specifications include having thick tanks and pressure release devices. Rail cars also have to undergo inspections and maintenance, per these rules.

The crew in charge of a hazardous materials train needs specialized training. And rail carriers need to have emergency response plans in case of a hazardous material spill.

Both truck and rail companies must follow regulations that require the proper classification, packaging and labeling of hazardous materials. The symbols on these labels let handlers and emergency responders know the potential risks the materials pose.

The Pipeline and Hazardous Materials Safety Administration’s security regulations prevent theft or sabotage of hazardous materials. They make sure that only authorized people can access the shipments. These regulations may require background checks for workers, secure storage facilities, and systems that track and monitor hazardous material.

Hazardous material shipments and incidents both have increased in the past 10 years. Anyone involved in the supply chain needs to understand hazardous material regulations.

Sticking to these rules helps get these materials from place to place safely. It also keeps safe those who handle them and minimizes the risk of accidents, injuries and environmental harm.

The Conversation

Michael F. Gorman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.




