Harvard-MIT Quantum Computing Breakthrough – “We Are Entering a Completely New Part of the Quantum World”

Team develops simulator with 256 qubits, largest of its kind ever created.

A team of physicists from the Harvard-MIT Center for Ultracold Atoms and other universities has developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits, or “qubits.”

The system marks a major step toward building large-scale quantum machines that could be used to shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in material science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.

“This moves the field into a new domain where no one has ever been to thus far,” said Mikhail Lukin, the George Vasmer Leverett Professor of Physics, co-director of the Harvard Quantum Initiative, and one of the senior authors of the study published on July 7, 2021, in the journal Nature. “We are entering a completely new part of the quantum world.”

Dolev Bluvstein (from left), Mikhail Lukin, and Sepehr Ebadi developed a special type of quantum computer known as a programmable quantum simulator. Ebadi is aligning the device that allows them to create the programmable optical tweezers. Credit: Rose Lincoln/Harvard Staff Photographer

According to Sepehr Ebadi, a physics student in the Graduate School of Arts and Sciences and the study’s lead author, it is the combination of the system’s unprecedented size and programmability that puts it at the cutting edge of the race for a quantum computer, which harnesses the mysterious properties of matter at extremely small scales to greatly advance processing power. Under the right circumstances, the increase in qubits means the system can store and process exponentially more information than the classical bits on which standard computers run.

“The number of quantum states that are possible with only 256 qubits exceeds the number of atoms in the solar system,” Ebadi said, explaining the system’s vast size.
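
As a rough back-of-the-envelope check (the atom count below is a commonly cited order-of-magnitude estimate, not a figure from the study), 256 qubits span 2^256 basis states, which dwarfs even a generous estimate of roughly 10^57 atoms in the solar system:

```python
# Back-of-the-envelope check of the "more states than atoms in the solar
# system" claim. The atom count is a rough order-of-magnitude estimate.
n_qubits = 256
basis_states = 2 ** n_qubits              # ~1.16e77 computational basis states
atoms_in_solar_system = 10 ** 57          # rough estimate, dominated by the Sun

print(f"2^{n_qubits} is about {basis_states:.2e}")
print(f"ratio to atom estimate: ~{basis_states / atoms_in_solar_system:.1e}")
```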

Already, the simulator has allowed researchers to observe several exotic quantum states of matter that had never before been realized experimentally, and to perform a quantum phase transition study so precise that it serves as the textbook example of how magnetism works at the quantum level.

By taking images of single atoms and arranging them in sequential frames, the researchers can even make fun atom videos. Credit: Courtesy of Lukin group

These experiments provide powerful insights on the quantum physics underlying material properties and can help show scientists how to design new materials with exotic properties.

The project uses a significantly upgraded version of a platform the researchers developed in 2017, which was capable of reaching a size of 51 qubits. That older system allowed the researchers to capture ultra-cold rubidium atoms and arrange them in a specific order using a one-dimensional array of individually focused laser beams called optical tweezers.

This new system allows the atoms to be assembled in two-dimensional arrays of optical tweezers. This increases the achievable system size from 51 to 256 qubits. Using the tweezers, researchers can arrange the atoms in defect-free patterns and create programmable shapes like square, honeycomb, or triangular lattices to engineer different interactions between the qubits.

Dolev Bluvstein looks at the 420 nm laser that allows the researchers to control and entangle Rydberg atoms. Credit: Harvard University

“The workhorse of this new platform is a device called the spatial light modulator, which is used to shape an optical wavefront to produce hundreds of individually focused optical tweezer beams,” said Ebadi. “These devices are essentially the same as what is used inside a computer projector to display images on a screen, but we have adapted them to be a critical component of our quantum simulator.”

The initial loading of the atoms into the optical tweezers is random, and the researchers must move the atoms around to arrange them into their target geometries. The researchers use a second set of moving optical tweezers to drag the atoms to their desired locations, eliminating the initial randomness. Lasers give the researchers complete control over the positioning of the atomic qubits and their coherent quantum manipulation.

Other senior authors of the study include Harvard Professors Subir Sachdev and Markus Greiner, who worked on the project along with Massachusetts Institute of Technology Professor Vladan Vuletić, and scientists from Stanford, the University of California Berkeley, the University of Innsbruck in Austria, the Austrian Academy of Sciences, and QuEra Computing Inc. in Boston.

“Our work is part of a really intense, high-visibility global race to build bigger and better quantum computers,” said Tout Wang, a research associate in physics at Harvard and one of the paper’s authors. “The overall effort [beyond our own] has top academic research institutions involved and major private-sector investment from Google, IBM, Amazon, and many others.”

The researchers are currently working to improve the system by refining laser control over the qubits and by making it more programmable. They are also actively exploring how the system can be used for new applications, ranging from probing exotic forms of quantum matter to solving challenging real-world problems that can be naturally encoded on the qubits.

“This work enables a vast number of new scientific directions,” Ebadi said. “We are nowhere near the limits of what can be done with these systems.”

Reference: “Quantum phases of matter on a 256-atom programmable quantum simulator” by Sepehr Ebadi, Tout T. Wang, Harry Levine, Alexander Keesling, Giulia Semeghini, Ahmed Omran, Dolev Bluvstein, Rhine Samajdar, Hannes Pichler, Wen Wei Ho, Soonwon Choi, Subir Sachdev, Markus Greiner, Vladan Vuletić and Mikhail D. Lukin, 7 July 2021, Nature.
DOI: 10.1038/s41586-021-03582-4

This work was supported by the Center for Ultracold Atoms, the National Science Foundation, the Vannevar Bush Faculty Fellowship, the U.S. Department of Energy, the Office of Naval Research, the Army Research Office MURI, and the DARPA ONISQ program.

Ancient Diamonds Show Earth Was Primed for Life’s Explosion of Diversity at Least 2.7 Billion Years Ago

One of the 2.7 billion year-old diamonds used in this work. Credit: Michael Broadley

A unique study of ancient diamonds has shown that the basic chemical composition of the Earth’s atmosphere, which makes it suitable for life’s explosion of diversity, was laid down at least 2.7 billion years ago. Volatile gases conserved in diamonds found in ancient rocks were present in similar proportions to those found in today’s mantle, which in turn indicates that there has been no fundamental change in the proportions of volatiles in the atmosphere over the last few billion years. This shows that one of the basic conditions necessary to support life, the presence of life-giving elements in sufficient quantity, appeared soon after Earth formed, and has remained fairly constant ever since.

Presenting the work at the Goldschmidt Geochemistry Conference, lead researcher Dr. Michael Broadley said, “The proportion and make-up of volatiles in the atmosphere reflects that found in the mantle, and we have no evidence of a significant change since these diamonds were formed 2.7 billion years ago.”

Volatiles, such as hydrogen, nitrogen, neon, and carbon-bearing species are light chemical elements and compounds, which can be readily vaporized due to heat, or pressure changes. They are necessary for life, especially carbon and nitrogen. Not all planets are rich in volatiles; Earth is volatile-rich, as is Venus, but Mars and the Moon lost most of their volatiles into space. Generally, a planet rich in volatiles has a better chance of sustaining life, which is why much of the search for life on planets surrounding distant stars (exoplanets) has focused on looking for volatiles.

Diagram of the Earth’s layers, showing the position in the upper mantle where the diamonds were formed. Credit: Michael Broadley

On Earth, volatile substances mostly bubble up from the inside of the planet, and are brought to the surface through such things as volcanic eruptions. Knowing when the volatiles arrived in the Earth’s atmosphere is key to understanding when the conditions on Earth were suitable for the origin and development of life, but until now there has been no way of understanding these conditions in the deep past.

Now French and Canadian researchers have used ancient diamonds as a time capsule, to examine the conditions deep inside the Earth’s mantle in the distant past. Studies of the gases trapped in these diamonds show that the volatile composition of the mantle has changed little over the last 2.7 billion years.

Lead researcher Michael Broadley (University of Lorraine, France) said, “Studying the composition of the Earth’s modern mantle is relatively simple. On average the mantle layer begins around 30 km below the Earth’s surface, and so we can collect samples thrown up by volcanoes and study the fluids and gases trapped inside. However, the constant churning of the Earth’s crust via plate tectonics means that older samples have mostly been destroyed. Diamonds, however, are comparatively indestructible; they’re ideal time capsules.”

He continued: “We managed to study diamonds trapped in 2.7 billion-year-old, highly preserved rock from Wawa, on Lake Superior in Canada. This means that the diamonds are at least as old as the rocks they are found in — probably older. It’s difficult to date diamonds, so this gave us a lucky opportunity to be sure of the minimum age. These diamonds are incredibly rare, and are not like the beautiful gems we think of when we think of diamonds. We heated them to over 2,000 °C to transform them into graphite, which then released tiny quantities of gas for measurement.”

The team measured the isotopes of helium, neon, and argon, and found that they were present in similar proportions to those found in the upper mantle today. This means that there has probably been little change in the proportion of volatiles generally, and that the distribution of essential volatile elements between the mantle and the atmosphere is likely to have remained fairly stable throughout the majority of Earth’s life. The mantle is the layer between the Earth’s crust and the core; it comprises around 84% of the Earth’s volume.

Dr. Broadley continued “This was a surprising result. It means the volatile-rich environment we see around us today is not a recent development, so providing the right conditions for life to develop. Our work shows that these conditions were present at least 2.7 billion years ago, but the diamonds we use may be much older, so it’s likely that these conditions were set well before our 2.7 billion year threshold.”

Commenting, Dr. Suzette Timmerman (University of Alberta, Canada) said:

“Diamonds are unique samples, as they lock in compositions during their formation. The Wawa fibrous diamonds specifically were a great selection to study — being more than 2.7 billion years old — and they provide important clues into the volatile composition in this period, the Neoarchean period. It is interesting that the upper mantle already appears degassed more than 2.7 billion years ago. This work is an important step towards understanding the mantle (and atmosphere) in the first half of Earth’s history and leads the way to further questions and research.”

NASA’s NEOWISE Asteroid-Hunting Space Telescope Gets Two-Year Mission Extension

Artist’s concept of NASA’s WISE (Wide-field Infrared Survey Explorer) spacecraft, which was an infrared-wavelength astronomical space telescope active from December 2009 to February 2011. In September 2013 the spacecraft was assigned a new mission as NEOWISE to help find near-Earth asteroids and comets. Credit: NASA/JPL-Caltech

NEOWISE has provided an estimate of the size of over 1,850 near-Earth objects, helping us better understand our nearest solar system neighbors.

For two more years, NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) will continue its hunt for asteroids and comets – including objects that could pose a hazard to Earth. This mission extension means NASA’s prolific near-Earth object (NEO) hunting space telescope will continue operations until June 2023.

“At NASA, we’re always looking up, surveying the sky daily to find potential hazards and exploring asteroids to help unlock the secrets of the formation of our solar system,” said NASA Administrator Bill Nelson. “Using ground-based telescopes, over 26,000 near-Earth asteroids have already been discovered, but there are many more to be found. We’ll enhance our observations with space-based capabilities like NEOWISE and the future, much more capable NEO Surveyor to find the remaining unknown asteroids more quickly and identify potentially-hazardous asteroids and comets before they are a threat to us here on Earth.”

Originally launched as the Wide-field Infrared Survey Explorer (WISE) mission in December 2009, the space telescope surveyed the entire sky in infrared wavelengths, detecting asteroids, dim stars, and some of the faintest galaxies visible in deep space. WISE completed its primary mission when it depleted its cryogenic coolant and it was put into hibernation in February 2011. Observations resumed in December 2013 when the space telescope was repurposed by NASA’s Planetary Science Division as “NEOWISE” to identify asteroids and comets throughout the solar system, with special attention to those that pass close to Earth’s orbit.

“NEOWISE provides a unique and critical capability in our global mission of planetary defense, by allowing us to rapidly measure the infrared emission and more accurately estimate the size of hazardous asteroids as they are discovered,” said Lindley Johnson, NASA’s Planetary Defense Officer and head of the Planetary Defense Coordination Office (PDCO) at NASA Headquarters in Washington. “Extending NEOWISE’s mission highlights not only the important work that is being done to safeguard our planet, but also the valuable science that is being collected about the asteroids and comets further out in space.”

Comet NEOWISE—which was discovered on March 27, 2020, by NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) mission—captured on July 6, 2020, above the northeast horizon just before sunrise in Tucson. Credit: Vishnu Reddy

As asteroids are heated by the Sun, they warm up and release this heat as faint infrared radiation. By studying this infrared signature, scientists can determine the size of an asteroid and compare it with measurements made by optical telescopes on the ground. This information can help us understand how reflective its surface is, while also providing clues as to its composition.

To date, NEOWISE has provided an estimate of the size of over 1,850 NEOs, helping us better understand our nearest solar system neighbors. As of March 2021, the mission has made 1,130,000 confirmed infrared observations of approximately 39,100 objects throughout the solar system since its restart in 2013. Mission data is shared freely by the IPAC/Caltech-led archive and the data has contributed to over 1,600 peer-reviewed studies. The University of Arizona is also a key partner of the NEOWISE mission as the home institution of the NEOWISE principal investigator, Amy Mainzer, who is a professor of planetary science at the University’s Lunar and Planetary Laboratory.

Among its many accomplishments after its reactivation, NEOWISE also discovered Comet NEOWISE, which was named after the mission and dazzled observers worldwide in 2020.

NEOWISE’s replacement, the next-generation NEO Surveyor, is currently scheduled to launch in 2026, and will greatly expand on what we have learned, and continue to learn, from NEOWISE.

“NEOWISE has taught us a lot about how to find, track, and characterize Earth-approaching asteroids and comets using a space-based infrared telescope,” said Mainzer. “The mission serves as an important precursor for carrying out a more comprehensive search for these objects using the new telescope we’re building, the NEO Surveyor.” Mainzer is also the lead of the NEO Surveyor mission.

The NEOWISE project is managed by NASA’s Jet Propulsion Laboratory in Southern California, a division of Caltech, and the University of Arizona, supported by NASA’s PDCO.

Protein “Big Bang” Reveals Molecular Makeup for Medicine and Bioengineering Applications

Research by Gustavo Caetano-Anollés and Fayez Aziz, University of Illinois, reveals a “big bang” during evolution of protein subunits known as domains. The team looked for protein relationships and domain recruitment into proteins over 3.8 billion years across all taxonomic units. Their results could have implications for vaccine development and disease management. Credit: Fred Zwicky, University of Illinois

Proteins have been quietly taking over our lives since the COVID-19 pandemic began. We’ve been living at the whim of the virus’s so-called “spike” protein, which has mutated dozens of times to create increasingly deadly variants. But the truth is, we have always been ruled by proteins. At the cellular level, they’re responsible for pretty much everything.

Proteins are so fundamental that DNA – the genetic material that makes each of us unique – is essentially just a long sequence of protein blueprints. That’s true for animals, plants, fungi, bacteria, archaea, and even viruses. And just as those groups of organisms evolve and change over time, so too do proteins and their component parts.

A new study from University of Illinois researchers, published in Scientific Reports, maps the evolutionary history and interrelationships of protein domains, the subunits of protein molecules, over 3.8 billion years.

“Knowing how and why domains combine in proteins during evolution could help scientists understand and engineer the activity of proteins for medicine and bioengineering applications. For example, these insights could guide disease management, such as making better vaccines from the spike protein of COVID-19 viruses,” says Gustavo Caetano-Anollés, professor in the Department of Crop Sciences, affiliate of the Carl R. Woese Institute for Genomic Biology at Illinois, and senior author on the paper.

Caetano-Anollés has studied the evolution of COVID mutations since the early stages of the pandemic, but that timeline represents a vanishingly tiny fraction of what he and doctoral student Fayez Aziz took on in their current study.

The researchers compiled the sequences and structures of millions of proteins encoded in hundreds of genomes across all taxonomic groups, including higher organisms and microbes. They focused not on whole proteins, but instead on structural domains.

“Most proteins are made of more than one domain. These are compact structural units, or modules, that harbor specialized functions,” Caetano-Anollés says. “More importantly, they are the units of evolution.”

After sorting proteins into domains to build evolutionary trees, they set to work building a network to understand how domains have developed and been shared across proteins throughout billions of years of evolution.

“We built a time series of networks that describe how domains have accumulated and how proteins have rearranged their domains through evolution. This is the first time such a network of ‘domain organization’ has been studied as an evolutionary chronology,” Fayez Aziz says. “Our survey revealed there is a vast evolving network describing how domains combine with each other in proteins.”

Each link of the network represents a moment when a particular domain was recruited into a protein, typically to perform a new function.

“This fact alone strongly suggests domain recruitment is a powerful force in nature,” Fayez Aziz says. The chronology also revealed which domains contributed important protein functions. For example, the researchers were able to trace the origins of domains responsible for environmental sensing as well as secondary metabolites, or toxins used in bacterial and plant defenses.

The analysis showed domains started to combine early in protein evolution, but there were also periods of explosive network growth. For example, the researchers describe a “big bang” of domain combinations 1.5 billion years ago, coinciding with the rise of multicellular organisms and eukaryotes, organisms with membrane-bound nuclei that include humans.

The existence of biological big bangs is not new. Caetano-Anollés’ team previously reported the massive and early origin of metabolism, and they recently found it again when tracking the history of metabolic networks.

The historical record of a big bang describing the evolutionary patchwork of proteins provides new tools to understand protein makeup.

“This could help identify, for example, why structural variations and genomic recombinations occur often in SARS-CoV-2,” Caetano-Anollés says.

He adds that this new way of understanding proteins could help prevent pandemics by dissecting how virus diseases originate. It could also help mitigate disease by improving vaccine design when outbreaks occur.

Reference: “Evolution of networks of protein domain organization” by M. Fayez Aziz and Gustavo Caetano-Anollés, 8 June 2021, Scientific Reports.
DOI: 10.1038/s41598-021-90498-8

The work was supported by the National Science Foundation and the U.S. Department of Agriculture.

The Department of Crop Sciences is in the College of Agricultural, Consumer and Environmental Sciences at the University of Illinois.

Deep Space Atomic Clock to Improve GPS, Increase Spacecraft Autonomy

NASA’s Deep Space Atomic Clock has been operating aboard the General Atomics Orbital Test Bed satellite since June 2019. This illustration shows the spacecraft in Earth orbit. Credit: General Atomics Electromagnetic Systems

Designed to improve navigation for robotic explorers and the operation of GPS satellites, the technology demonstration reports a significant milestone.

Spacecraft that venture beyond our Moon rely on communication with ground stations on Earth to figure out where they are and where they’re going. NASA’s Deep Space Atomic Clock is working toward giving those far-flung explorers more autonomy when navigating. In a new paper published on June 30, 2021, in the journal Nature, the mission reports progress in their work to improve the ability of space-based atomic clocks to measure time consistently over long periods.

Known as stability, this feature also impacts the operation of GPS satellites that help people navigate on Earth, so this work also has the potential to increase the autonomy of next-generation GPS spacecraft.

To calculate the trajectory of a distant spacecraft, engineers send signals from the spacecraft to Earth and back. They use refrigerator-size atomic clocks on the ground to log the timing of those signals, which is essential for precisely measuring the spacecraft’s position. But for robots on Mars or more distant destinations, waiting for the signals to make the trip can quickly add up to tens of minutes or even hours.
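
A rough sketch of where those delays come from (the Earth–Mars distances below are approximate public values, not figures from the mission):

```python
# Rough round-trip light time to Mars. The Earth-Mars distance varies between
# roughly 0.4 AU at closest approach and about 2.7 AU near solar conjunction;
# these are approximate public values, not mission figures.
AU_M = 1.495978707e11        # astronomical unit, meters
C = 299_792_458.0            # speed of light, m/s

for label, distance_au in [("closest approach (~0.4 AU)", 0.4),
                           ("near conjunction (~2.7 AU)", 2.7)]:
    round_trip_min = 2 * distance_au * AU_M / C / 60
    print(f"{label}: ~{round_trip_min:.0f} min round trip")
# Prints roughly 7 minutes and 45 minutes, i.e. "tens of minutes".
```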

If those spacecraft carried atomic clocks, they could calculate their own position and direction, but the clocks would have to be highly stable. GPS satellites carry atomic clocks to help us get to our destinations on Earth, but those clocks require updates several times a day to maintain the necessary level of stability. Deep space missions would require more stable space-based clocks.

A glimpse of the Deep Space Atomic Clock in the middle bay of the General Atomics Electromagnetic Systems Orbital Test Bed spacecraft. Credit: NASA

Managed by NASA’s Jet Propulsion Laboratory in Southern California, the Deep Space Atomic Clock has been operating aboard General Atomics’ Orbital Test Bed spacecraft since June 2019. The new study reports that the mission team has set a new record for long-term atomic clock stability in space, reaching more than 10 times the stability of current space-based atomic clocks, including those on GPS satellites.

When Every Nanosecond Counts

All atomic clocks have some degree of instability that leads to an offset in the clock’s time versus the actual time. If not corrected, the offset, while minuscule, increases rapidly, and with spacecraft navigation, even a tiny offset could have drastic effects.

One of the key goals of the Deep Space Atomic Clock mission was to measure the clock’s stability over longer and longer periods, to see how it changes with time. In the new paper, the team reports a level of stability that leads to a time deviation of less than four nanoseconds after more than 20 days of operation.

“As a general rule, an uncertainty of one nanosecond in time corresponds to a distance uncertainty of about one foot,” said Eric Burt, an atomic clock physicist for the mission at JPL and co-author of the new paper. “Some GPS clocks must be updated several times a day to maintain this level of stability, and that means GPS is highly dependent on communication with the ground. The Deep Space Atomic Clock pushes this out to a week or more, thus potentially giving an application like GPS much more autonomy.”
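
A quick sanity check of that rule of thumb, and of what the reported drift of under four nanoseconds would mean for ranging, using nothing but the speed of light (a rough illustration, not mission analysis):

```python
# "One nanosecond of clock error is about one foot of range error," since
# range is inferred from signal travel time at the speed of light.
C = 299_792_458.0            # speed of light, m/s

one_ns_m = C * 1e-9
print(f"1 ns of timing error -> {one_ns_m:.3f} m (~1 foot)")
print(f"4 ns after 20+ days  -> {4 * one_ns_m:.2f} m (~4 feet)")
```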

The stability and subsequent time delay reported in the new paper are about five times better than what the team reported in the spring of 2020. This does not represent an improvement in the clock itself, but in the team’s measurement of the clock’s stability. Longer operating periods and almost a full year of additional data made it possible to improve the precision of their measurement.

NASA’s Deep Space Atomic Clock could revolutionize deep space navigation. One key requirement for the technology demonstration was a compact design. The complete hardware package is shown here and is only about 10 inches (25 centimeters) on each side. Credit: NASA/JPL-Caltech

The Deep Space Atomic Clock mission will conclude in August, but NASA announced that work on this technology continues: the Deep Space Atomic Clock-2, an improved version of the cutting-edge timekeeper, will fly on the VERITAS (short for Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy) mission to Venus. Like its predecessor, the new space clock is a technology demonstration, meaning its goal is to advance in-space capabilities by developing instruments, hardware, software, or the like that doesn’t currently exist. Built by JPL and funded by NASA’s Space Technology Mission Directorate (STMD), the ultra-precise clock signal generated with this technology could help enable autonomous spacecraft navigation and enhance radio science observations on future missions.

A computer-aided design (CAD) drawing of the clock’s linear ion trap – the “heart” of the Deep Space Atomic Clock’s physics package – which is slightly smaller than two rolls of quarters laid side by side. The Deep Space Atomic Clock is a small, low-mass atomic clock based on mercury-ion trap technology, demonstrated in space to provide the unprecedented stability needed for next-generation deep space navigation and radio science. Credit: NASA/JPL

“NASA’s selection of Deep Space Atomic Clock-2 on VERITAS speaks to this technology’s promise,” said Todd Ely, Deep Space Atomic Clock principal investigator and project manager at JPL. “On VERITAS, we aim to put this next generation space clock through its paces and demonstrate its potential for deep space navigation and science.”

Reference: “Demonstration of a trapped-ion atomic clock in space” by E. A. Burt, J. D. Prestage, R. L. Tjoelker, D. G. Enzer, D. Kuang, D. W. Murphy, D. E. Robison, J. M. Seubert, R. T. Wang and T. A. Ely, 30 June 2021, Nature.
DOI: 10.1038/s41586-021-03571-7

More About the Mission

The Deep Space Atomic Clock is hosted on a spacecraft provided by General Atomics Electromagnetic Systems of Englewood, Colorado. It is sponsored by STMD’s Technology Demonstration Missions program located at NASA’s Marshall Space Flight Center in Huntsville, Alabama, and NASA’s Space Communications and Navigation (SCaN) program within NASA’s Human Exploration and Operations Mission Directorate. JPL manages the project.

Diet Rich in Omega 3 Fatty Acids May Help Reduce Migraine Headaches

Trial provides ‘grounds for optimism’ for many people with persistent headaches and those who care for them.

Eating a diet rich in omega 3 (n-3) fatty acids reduces the frequency of headaches compared with a diet with normal intake of omega 3 and omega 6 (n-6) fatty acids, finds a study published by The BMJ today (June 30, 2021).

Modern industrialized diets tend to be low in omega 3 fatty acids and high in omega 6 fatty acids. These fatty acids are precursors to oxylipins — molecules involved in regulating pain and inflammation.

Oxylipins derived from omega 3 fatty acids are associated with pain-reducing effects, while oxylipins derived from omega 6 fatty acids worsen pain and can provoke migraine. But previous studies evaluating omega 3 fatty acid supplements for migraine have been inconclusive.

So a team of US researchers wanted to find out whether diets rich in omega 3 fatty acids would increase levels of the pain-reducing 17-hydroxydocosahexaenoic acid (17-HDHA) and reduce the frequency and severity of headaches.

Their results are based on 182 patients at the University of North Carolina, USA (88% female; average age 38 years) with migraine headaches on 5-20 days per month who were randomly assigned to one of three diets for 16 weeks.

The control diet included typical levels of omega 3 and omega 6 fatty acids. Both interventional diets raised omega 3 fatty acid intake. One kept omega 6 acid intake the same as the control diet, and the other concurrently lowered omega 6 acid intake.

During the trial, participants received regular dietary counseling and access to online support information. They also completed the headache impact test (HIT-6) — a questionnaire assessing headache impact on quality of life. Headache frequency was assessed daily with an electronic diary.

Over the 16 weeks, both interventional diets increased 17-HDHA levels compared with the control diet, and while HIT-6 scores improved in both interventional groups, they were not statistically significantly different from the control group.

However, headache frequency was statistically significantly decreased in both intervention groups.

The high omega 3 diet was associated with a reduction of 1.3 headache hours per day and two headache days per month. The high omega 3 plus low omega 6 diet group saw a reduction of 1.7 headache hours per day and four headache days per month, suggesting additional benefit from lowering dietary omega-6 fatty acid.

Participants in the intervention groups also reported shorter and less severe headaches compared with those in the control group.

This was a high-quality, well-designed trial, but the researchers do point to some limitations, such as the difficulty for patients to stick to a strict diet and the fact that most participants were relatively young women, so the results may not apply to children, older adults, men, or other populations.

“While the diets did not significantly improve quality of life, they produced large, robust reductions in frequency and severity of headaches relative to the control diet,” they write.

“This study provides a biologically plausible demonstration that pain can be treated through targeted dietary alterations in humans. Collective findings suggest causal mechanisms linking n-3 and n-6 fatty acids to [pain regulation], and open the door to new approaches for managing chronic pain in humans,” they conclude.

These results support recommending a high omega 3 diet to patients in clinical practice, says Rebecca Burch at the Brigham and Women’s Hospital, in a linked editorial.

She acknowledges that interpretation of this study’s findings is complex, but points out that trials of recently approved drugs for migraine prevention reported reductions of around 2-2.5 headache days per month compared with placebo, suggesting that a dietary intervention can be comparable or better.

What’s more, many people with migraine are highly motivated and interested in dietary changes, she adds. These findings “take us one step closer to a goal long sought by headache patients and those who care for them: a migraine diet backed up by robust clinical trial results.”

References:

“Dietary alteration of n-3 and n-6 fatty acids for headache reduction in adults with migraine: randomized controlled trial” 30 June 2021, The BMJ.
DOI: 10.1136/bmj.n1448

“Dietary omega 3 fatty acids for migraine” 30 June 2021, The BMJ.
DOI: 10.1136/bmj.n1535

Funding: National Institutes of Health (NIH); National Center for Complementary and Integrative Health (NCCIH)

Making Seawater Drinkable in Minutes: A New Alternative Desalination Membrane

Schematic of co-axial electrospinning device. Credit: Elsevier

A new alternative seawater desalination membrane to produce drinking water.

According to the World Health Organization, about 785 million people around the world lack a clean source of drinking water. Despite the vast amount of water on Earth, most of it is seawater and freshwater accounts for only about 2.5% of the total. One of the ways to provide clean drinking water is to desalinate seawater. The Korea Institute of Civil Engineering and Building Technology (KICT) has announced the development of a stable performance electrospun nanofiber membrane to turn seawater into drinking water by membrane distillation process.

Membrane wetting is the most challenging issue in membrane distillation. If a membrane exhibits wetting during membrane distillation operation, it must be replaced. Progressive membrane wetting has especially been observed in long-term operations. A fully wetted membrane leads to inefficient membrane distillation performance, as the feed solution flows through the membrane and produces low-quality permeate.

A research team at KICT, led by Dr. Yunchul Woo, has developed co-axial electrospun nanofiber membranes fabricated by an alternative nanotechnology, electrospinning. This new desalination technology shows potential to help solve the world’s freshwater shortage. The developed technology can prevent wetting issues and also improve long-term stability in the membrane distillation process. The nanofibers must form a three-dimensional hierarchical structure in the membranes to achieve higher surface roughness and hence better hydrophobicity.

Merits of co-axial electrospun nanofiber membrane. Credit: Elsevier

The co-axial electrospinning technique is one of the most favorable and simple options to fabricate membranes with three-dimensional hierarchical structures. Dr. Woo’s research team used poly(vinylidene fluoride-co-hexafluoropropylene) as the core and silica aerogel mixed with a low concentration of the polymer as the sheath to produce a co-axial composite membrane and obtain a superhydrophobic membrane surface. In fact, silica aerogel exhibited a much lower thermal conductivity compared with that of conventional polymers, which led to increased water vapor flux during the membrane distillation process due to a reduction of conductive heat losses.

Most of the studies using electrospun nanofiber membranes in membrane distillation applications operated for less than 50 hours, although they exhibited a high water vapor flux performance. In contrast, Dr. Woo’s research team ran the membrane distillation process with the fabricated co-axial electrospun nanofiber membrane continuously for 30 days.

The co-axial electrospun nanofiber membrane achieved 99.99% salt rejection over the full month of operation. Based on the results, the membrane operated well without wetting and fouling issues, thanks to its low sliding angle and low thermal conductivity. Temperature polarization is one of the significant drawbacks in membrane distillation; it can decrease water vapor flux performance during operation due to conductive heat losses. The membrane is suitable for long-term membrane distillation applications because it combines several important characteristics: a low sliding angle, low thermal conductivity, reduced temperature polarization, and resistance to wetting and fouling, all while maintaining a high water vapor flux.

Dr. Woo’s research team noted that it is more important to have a stable process than a high water vapor flux performance in a commercially available membrane distillation process. Dr. Woo said that “the co-axial electrospun nanofiber membrane has strong potential for the treatment of seawater solutions without suffering from wetting issues and may be the appropriate membrane for pilot-scale and real-scale membrane distillation applications.”

Reference: “Co-axially electrospun superhydrophobic nanofiber membranes with 3D-hierarchically structured surface for desalination by long-term membrane distillation” by Yun Chul Woo, Minwei Yao, Wang-Geun Shim, Youngjin Kim, Leonard D. Tijing, Bumsuk Jung, Seung-Hyun Kim and Ho Kyong Shon, 4 January 2021, Journal of Membrane Science.
DOI: 10.1016/j.memsci.2020.119028

The Korea Institute of Civil Engineering and Building Technology (KICT) is a government-sponsored research institute established to contribute to the development of Korea’s construction industry and national economic growth by developing source and practical technology in the fields of construction and national land management.

This research was supported by an internal grant (20200543-001) from the KICT, Republic of Korea. The outcomes of this project were published in the international journal, Journal of Membrane Science, a renowned international journal in the polymer science field (IF: 7.183 and Rank #3 of the JCR category) in April 2021.

Analysis of Thousands of Drugs Reveals Potential New COVID-19 Antivirals

Researchers at the Francis Crick Institute and University of Dundee have screened thousands of drug and chemical molecules and identified a range of potential antivirals that could be developed into new treatments for COVID-19 or in preparation for future coronavirus outbreaks.

While COVID-19 vaccines are being rolled out, there are still few drug options that can be used to treat patients with the virus, to reduce symptoms and speed up recovery time. These treatments are especially important for groups where the vaccines are less effective, such as some patients with blood cancers.

In a series of seven papers, published today (July 2, 2021) in the Biochemical Journal, the scientists identified 15 molecules which inhibit the growth of SARS-CoV-2 by blocking different enzymes involved in its replication.

The researchers developed and ran tests on around 5,000 molecules provided by the Crick’s High Throughput Screening team to see if any of them effectively blocked the functioning of any of seven SARS-CoV-2 enzymes. The tests were based on fluorescence changes, with a special imaging tool detecting whether the enzymes had been affected.

They then validated and tested the potential inhibitors against SARS-CoV-2 in the lab, to determine if they effectively slowed viral growth. The team found at least one inhibitor for all seven enzymes.

Three of the molecules identified are existing drugs used to treat other diseases: lomeguatrib is used in melanoma and has few side effects; suramin is a treatment for African sleeping sickness and river blindness; and trifluperidol is used in cases of mania and schizophrenia. As there is existing safety data on these drugs, it may be possible to develop them into SARS-CoV-2 antivirals more quickly.

John Diffley, lead author of the papers and associate research director and head of the Chromosome Replication Laboratory at the Crick, said: “We’ve developed a chemical toolbox of information about potential new COVID-19 drugs. We hope this attracts attention from scientists with the drug development and clinical expertise needed to test these further, and ultimately see if any could become safe and effective treatments for COVID-19 patients.”

The 15 molecules were also tested in combination with remdesivir, an antiviral being used to treat patients with COVID-19. Four of these, all of which target the SARS-CoV-2 enzyme Nsp14 mRNA Cap methyltransferase, were found to improve the effectiveness of this antiviral in lab tests.

The scientists now plan to run tests to see if any pairing of the 15 molecules they identified decrease the virus’ growth more than if they are used alone. Targeting enzymes involved in virus replication could also help prepare for future viral pandemics.

“Proteins on the outside of viruses evolve rapidly but within different classes of viruses are well conserved proteins that change very little with time,” adds John.

“If we can develop drugs that inhibit these proteins, in the situation of a future pandemic, they could provide a valuable first line of defense, before vaccines become available.”

References:

“Identification of SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of the nsp14 RNA Cap Methyltransferase” by Basu, S. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210219

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of nsp5 Main Protease” by Milligan, J. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210197

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of Nsp12/7/8 RNA-dependent RNA Polymerase” by Bertolin, A. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210200

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of Nsp13 Helicase” by Zeng, J. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210201

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of Nsp3 Papain-like Protease” by Lim, CT. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210244

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of Nsp15 Endoribonuclease” by Canal, B. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210199

“Identifying SARS-CoV-2 Antiviral Compounds by Screening for Small Molecule Inhibitors of Nsp14/nsp10 Exoribonuclease” by Canal, B. et al., 2 July 2021, Biochemical Journal.
DOI: 10.1042/BCJ20210198

The Francis Crick Institute is a biomedical discovery institute dedicated to understanding the fundamental biology underlying health and disease. Its work is helping to understand why disease develops and to translate discoveries into new ways to prevent, diagnose and treat illnesses such as cancer, heart disease, stroke, infections, and neurodegenerative diseases.

An independent organization, its founding partners are the Medical Research Council (MRC), Cancer Research UK, Wellcome, UCL (University College London), Imperial College London, and King’s College London.

The Crick was formed in 2015, and in 2016 it moved into a brand new state-of-the-art building in central London which brings together 1500 scientists and support staff working collaboratively across disciplines, making it the biggest biomedical research facility under a single roof in Europe.

Hawking’s Black Hole Theorem Confirmed Observationally for the First Time

An artist’s impression of two black holes about to collide and merge.

Study offers evidence, based on gravitational waves, to show that the total area of a black hole’s event horizon can never decrease.

There are certain rules that even the most extreme objects in the universe must obey. A central law for black holes predicts that the area of their event horizons — the boundary beyond which nothing can ever escape — should never shrink. This law is Hawking’s area theorem, named after physicist Stephen Hawking, who derived the theorem in 1971.

Fifty years later, physicists at MIT and elsewhere have now confirmed Hawking’s area theorem for the first time, using observations of gravitational waves. Their results appear today (July 1, 2021) in Physical Review Letters.

In the study, the researchers take a closer look at GW150914, the first gravitational wave signal detected by the Laser Interferometer Gravitational-wave Observatory (LIGO), in 2015. The signal was a product of two inspiraling black holes that generated a new black hole, along with a huge amount of energy that rippled across space-time as gravitational waves.

If Hawking’s area theorem holds, then the horizon area of the new black hole should not be smaller than the total horizon area of its parent black holes. In the new study, the physicists reanalyzed the signal from GW150914 before and after the cosmic collision and found that indeed, the total event horizon area did not decrease after the merger — a result that they report with 95 percent confidence.

Physicists at MIT and elsewhere have used gravitational waves to observationally confirm Hawking’s black hole area theorem for the first time. This computer simulation shows the collision of two black holes that produced the gravitational wave signal, GW150914. Credit: Simulating eXtreme Spacetimes (SXS) project, courtesy of LIGO

Their findings mark the first direct observational confirmation of Hawking’s area theorem, which has been proven mathematically but never observed in nature until now. The team plans to test future gravitational-wave signals to see if they might further confirm Hawking’s theorem or be a sign of new, law-bending physics.

“It is possible that there’s a zoo of different compact objects, and while some of them are the black holes that follow Einstein and Hawking’s laws, others may be slightly different beasts,” says lead author Maximiliano Isi, a NASA Einstein Postdoctoral Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “So, it’s not like you do this test once and it’s over. You do this once, and it’s the beginning.”

Isi’s co-authors on the paper are Will Farr of Stony Brook University and the Flatiron Institute’s Center for Computational Astrophysics, Matthew Giesler of Cornell University, Mark Scheel of Caltech, and Saul Teukolsky of Cornell University and Caltech.

An age of insights

In 1971, Stephen Hawking proposed the area theorem, which set off a series of fundamental insights about black hole mechanics. The theorem predicts that the total area of a black hole’s event horizon — and all black holes in the universe, for that matter — should never decrease. The statement was a curious parallel of the second law of thermodynamics, which states that the entropy, or degree of disorder within an object, should also never decrease.

The similarity between the two theories suggested that black holes could behave as thermal, heat-emitting objects — a confounding proposition, as black holes by their very nature were thought to never let energy escape, or radiate. Hawking eventually squared the two ideas in 1974, showing that black holes could have entropy and emit radiation over very long timescales if their quantum effects were taken into account. This phenomenon was dubbed “Hawking radiation” and remains one of the most fundamental revelations about black holes.

“It all started with Hawking’s realization that the total horizon area in black holes can never go down,” Isi says. “The area law encapsulates a golden age in the ’70s where all these insights were being produced.”

Hawking and others have since shown that the area theorem works out mathematically, but there had been no way to check it against nature until LIGO’s first detection of gravitational waves.

Hawking, on hearing of the result, quickly contacted LIGO co-founder Kip Thorne, the Feynman Professor of Theoretical Physics at Caltech. His question: Could the detection confirm the area theorem?

At the time, researchers did not have the ability to pick out the necessary information within the signal, before and after the merger, to determine whether the final horizon area did not decrease, as Hawking’s theorem predicts. It wasn’t until several years later, with the development of a technique by Isi and his colleagues, that testing the area law became feasible.

Before and after

In 2019, Isi and his colleagues developed a technique to extract the reverberations immediately following GW150914’s peak — the moment when the two parent black holes collided to form a new black hole. The team used the technique to pick out specific frequencies, or tones of the otherwise noisy aftermath, that they could use to calculate the final black hole’s mass and spin.

A black hole’s mass and spin are directly related to the area of its event horizon, and Thorne, recalling Hawking’s query, approached them with a follow-up: Could they use the same technique to compare the signal before and after the merger, and confirm the area theorem?

The researchers took on the challenge, and again split the GW150914 signal at its peak. They developed a model to analyze the signal before the peak, corresponding to the two inspiraling black holes, and to identify the mass and spin of both black holes before they merged. From these estimates, they calculated their total horizon area — roughly 235,000 square kilometers, or about nine times the area of Massachusetts.

They then used their previous technique to extract the “ringdown,” or reverberations of the newly formed black hole, from which they calculated its mass and spin, and ultimately its horizon area, which they found was equivalent to 367,000 square kilometers (approximately 13 times the Bay State’s area).
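
To see how mass and spin translate into horizon area, the sketch below applies the standard Kerr black hole area formula, A = 8π(GM/c²)²(1 + √(1 − χ²)). The masses and spins are approximate publicly reported LIGO estimates for GW150914 (about 36 and 29 solar masses for the parent black holes, treated here as non-spinning for simplicity, and about 62 solar masses with a dimensionless spin of roughly 0.67 for the remnant); they are assumptions used for illustration, not numbers quoted in this article.

```python
import math

# Horizon area of a Kerr black hole: A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2)).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0        # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

def horizon_area_km2(mass_in_suns, chi):
    r_g = G * mass_in_suns * M_SUN / C**2                            # gravitational radius, m
    return 8 * math.pi * r_g**2 * (1 + math.sqrt(1 - chi**2)) / 1e6  # m^2 -> km^2

# Approximate publicly reported GW150914 estimates (assumptions for illustration):
area_before = horizon_area_km2(36, 0.0) + horizon_area_km2(29, 0.0)
area_after = horizon_area_km2(62, 0.67)
print(f"total horizon area before merger: ~{area_before:,.0f} km^2")  # on the order of 235,000 km^2
print(f"horizon area after merger:        ~{area_after:,.0f} km^2")   # on the order of 367,000 km^2
```

With these inputs, the totals land close to the roughly 235,000 and 367,000 square kilometers quoted above, and the remnant’s area comfortably exceeds the combined area of its parents, as the theorem requires.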

“The data show with overwhelming confidence that the horizon area increased after the merger, and that the area law is satisfied with very high probability,” Isi says. “It was a relief that our result does agree with the paradigm that we expect, and does confirm our understanding of these complicated black hole mergers.”

The team plans to further test Hawking’s area theorem, and other longstanding theories of black hole mechanics, using data from LIGO and Virgo, its counterpart in Italy.

“It’s encouraging that we can think in new, creative ways about gravitational-wave data, and reach questions we thought we couldn’t before,” Isi says. “We can keep teasing out pieces of information that speak directly to the pillars of what we think we understand. One day, this data may reveal something we didn’t expect.”

Reference: “Testing the Black-Hole Area Law with GW150914” by Maximiliano Isi, Will M. Farr, Matthew Giesler, Mark A. Scheel and Saul A. Teukolsky, 1 July 2021, Physical Review Letters.
DOI: 10.1103/PhysRevLett.127.011103

This research was supported, in part, by NASA, the Simons Foundation, and the National Science Foundation.

Worrying New Insights Into the Chemicals in Plastics – Significant Risk to People and the Environment

Plastic is practical, cheap and incredibly popular. Every year, more than 350 million metric tons are produced worldwide. These plastics contain a huge variety of chemicals that may be released during their lifecycles — including substances that pose a significant risk to people and the environment. However, only a small proportion of the chemicals contained in plastic are publicly known or have been extensively studied.

A team of researchers led by Stefanie Hellweg, ETH Professor of Ecological Systems Design, has for the first time compiled a comprehensive database of plastic monomers, additives and processing aids for use in the production and processing of plastics on the world market, and systematically categorized them on the basis of usage patterns and hazard potential. The study, just published in the scientific journal Environmental Science & Technology, provides an enlightening but worrying insight into the world of chemicals that are intentionally added to plastics.

A high level of chemical diversity

The team identified around 10,500 chemicals in plastic. Many are used in packaging (2,489), textiles (2,429) and food-contact applications (2,109); some are for toys (522) and medical devices, including masks (247). Of the 10,500 substances identified, the researchers categorized 2,480 substances (24 percent) as substances of potential concern.

“This means that almost a quarter of all the chemicals used in plastic are either highly stable, accumulate in organisms or are toxic. These substances are often toxic to aquatic life, cause cancer or damage specific organs,” explains Helene Wiesinger, doctoral student at the Chair of Ecological Systems Design and lead author of the study. About half are chemicals with high production volumes in the EU or the US.

“It is particularly striking that many of the questionable substances are barely regulated or are ambiguously described,” continues Wiesinger.

In fact, 53 percent of all the substances of potential concern are not regulated in the US, the EU or Japan. More surprisingly, 901 hazardous substances are approved for use in food contact plastics in these regions. Finally, scientific studies are lacking for about 10 percent of the identified substances of potential concern.

Plastic monomers, additives and processing aids

Plastics are made of organic polymers built up from repeating monomer units. A wide variety of additives, such as antioxidants, plasticisers and flame retardants, give the polymer matrix the desired properties. Catalysts, solvents and other chemicals are also used as processing aids in production.

“Until now, research, industry and regulators have mainly concentrated on a limited number of dangerous chemicals known to be present in plastics,” says Wiesinger. Today, plastic packaging is seen as a main source of organic contamination in food, while phthalate plasticisers and brominated flame retardants are detectable in house dust and indoor air. Earlier studies have already indicated that significantly more plastic chemicals used worldwide are potentially hazardous.

Nevertheless, the results of the inventory came as an unpleasant surprise to the researchers. “The unexpectedly high number of substances of potential concern is worrying,” says Zhanyun Wang, senior scientist in Hellweg’s group. Exposure to such substances can have a negative impact on the health of consumers and workers and on polluted ecosystems. Problematic chemicals can also affect recycling processes and the safety and quality of recycled plastics.

Wang stresses that even more chemicals in plastics could be problematic. “Recorded hazard data are often limited and scattered. For 4,100 or 39 percent of all the substances we identified, we were not able to categorize them due to a lack of hazard classifications,” he says.

A lack of data and transparency

The two researchers identified the lack of transparency about chemicals in plastics, and the scattering of data across disconnected sources, as a central problem. In over two and a half years of detective work, they combed through more than 190 publicly accessible data sources from research, industry and authorities and identified 60 sources with sufficient information about intentionally added substances in plastics. “We found multiple critical knowledge and data gaps, in particular for the substances and their actual uses. This ultimately hinders consumers’ choice of safe plastic products,” they say.

Wiesinger and Wang are pursuing the goal of a sustainable circular plastic economy. They see an acute need for effective global chemicals management; such a system would have to be transparent and independent, and oversee all hazardous substances in full. The two researchers say that open and easy access to reliable information is crucial.

Reference: “Deep Dive into Plastic Monomers, Additives, and Processing Aids” by Helene Wiesinger, Zhanyun Wang and Stefanie Hellweg, 21 June 2021, Environmental Science & Technology.
DOI: 10.1021/acs.est.1c00976

Optical Tweezer Technology Breakthrough Overcomes Dangers of Heat
Optical Tweezer Technology Breakthrough Overcomes Dangers of Heat
Optical Tweezers Use Light to Trap Particles

Optical tweezers use light to trap particles for analysis. A new breakthrough keeps those particles from overheating. Credit: The University of Texas at Austin

Three years ago, Arthur Ashkin won the Nobel Prize for inventing optical tweezers, which use light in the form of a high-powered laser beam to capture and manipulate particles. Despite being created decades ago, optical tweezers still lead to major breakthroughs and are widely used today to study biological systems.

However, optical tweezers do have flaws. The prolonged interaction with the laser beam can alter molecules and particles or damage them with excessive heat.

Researchers at The University of Texas at Austin have created a new version of optical tweezer technology that fixes this problem, a development that could open the already highly regarded tools to new types of research and simplify processes for using them today.

The breakthrough that avoids this overheating problem combines two concepts: a substrate made of materials that cool down when light (in this case, a laser) is shined on them, and thermophoresis, a phenomenon in which mobile particles tend to migrate toward cooler regions.
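To make the second ingredient concrete, here is a minimal, hypothetical sketch of thermophoretic drift in one dimension: particles in a temperature gradient pick up a drift velocity proportional to that gradient and move toward the cooler region. The mobility, gradient, and step sizes below are illustrative numbers, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D model: a laser-cooled spot sits at x = 0, and temperature rises with distance from it.
# Thermophoretic drift v = -D_T * dT/dx pushes particles down the gradient, toward the cool spot.
D_T = 1e-12        # hypothetical thermophoretic mobility, m^2 / (s*K)
grad_T = 1e6       # hypothetical temperature gradient, K/m
dt, steps = 1e-3, 2000
noise = 1e-7       # Brownian jitter per step, m

x = rng.uniform(5e-6, 2e-5, size=100)     # particles start 5-20 micrometers from the cool spot
for _ in range(steps):
    drift = -D_T * grad_T * dt            # negative: motion toward the cooler region
    x = np.clip(x + drift + noise * rng.standard_normal(x.size), 0.0, None)

print(f"mean distance from the cool spot: {x.mean() * 1e6:.1f} micrometers")
```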

The cooler materials attract particles, making them easier to isolate, while also protecting them from overheating. By solving the heat problem, optical tweezers could become more widely used to study biomolecules, DNA, diseases and more.

“Optical tweezers have many advantages, but they are limited because whenever the light captures objects, they heat up,” said Yuebing Zheng, the corresponding author of a new paper published in Science Advances and an associate professor in the Walker Department of Mechanical Engineering. “Our tool addresses this critical challenge; instead of heating the trapped objects, we have them controlled at a lower temperature.”

Optical tweezers do the same thing as regular tweezers — pick up small objects and manipulate them. However, optical tweezers work at a much smaller scale and use light to capture and move objects.

Analyzing DNA is a common use of optical tweezers. But doing so requires attaching nano-sized glass beads to the particles. Then to move the particles, the laser is shined on the beads, not the particles themselves, because the DNA would be damaged by the heating effect of the light.

“When you are forced to add more steps to the process, you increase uncertainty because now you have introduced something else into the biological system that may impact it,” Zheng said.

This new and improved version of optical tweezers eliminates these extra steps.

The team’s next steps include developing autonomous control systems, making them easier for people without specialized training to use and extending the tweezers’ capabilities to handle biological fluids such as blood and urine. And they are working to commercialize the discovery.

Zheng and his team have much variety in their research, but it all centers on light and how it interacts with materials. Because of this focus on light, he has closely followed, and used, optical tweezers in his research. The researchers were familiar with thermophoresis and hoped they could trigger it with cooler materials, which would actually draw particles to the laser to simplify analysis.

Reference: “Opto-refrigerative tweezers” by Jingang Li, Zhihan Chen, Yaoran Liu, Pavana Siddhartha Kollipara, Yichao Feng, Zhenglong Zhang and Yuebing Zheng, 25 June 2021, Science Advances.
DOI: 10.1126/sciadv.abh1101

This research was supported by grants from the National Institutes of Health’s National Institute of General Medical Sciences and the National Science Foundation. Other authors are Jingang Li and Zhihan Chen of UT’s Texas Materials Institute; Yaoran Liu of the Department of Electrical and Computer Engineering; Pavana Siddhartha Kollipara of the Walker Department of Mechanical Engineering; and Yichao Feng and Zhenglong Zhang of Shaanxi Normal University’s School of Physics and Information in China.

Uncovering Genetic Traces to Discover How Humans Adapted to Historical Coronavirus Outbreaks
Uncovering Genetic Traces to Discover How Humans Adapted to Historical Coronavirus Outbreaks
Coronavirus Graphic

Coronavirus graphic. Credit: Gerd Altmann

An international team of researchers co-led by the University of Adelaide and the University of Arizona has analyzed the genomes of more than 2,500 modern humans from 26 worldwide populations, to better understand how humans have adapted to historical coronavirus outbreaks.

In a paper published in Current Biology, the researchers used cutting-edge computational methods to uncover genetic traces of adaptation to coronaviruses, the family of viruses responsible for three major outbreaks in the last 20 years, including the ongoing pandemic.

“Modern human genomes contain evolutionary information tracing back hundreds of thousands of years; however, it’s only in the past few decades that geneticists have learned how to decode the extensive information captured within our genomes,” said lead author Dr. Yassine Souilmi, with the University of Adelaide’s School of Biological Sciences.

“This includes physiological and immunological ‘adaptions’ that have enabled humans to survive new threats, including viruses.

“Viruses are very simple creatures with the sole objective to make more copies of themselves. Their simple biological structure renders them incapable of reproducing by themselves so they must invade the cells of other organisms and hijack their molecular machinery to exist.”

Yassine Souilmi

Lead author Dr. Yassine Souilmi, Australian Centre for Ancient DNA, School of Biological Sciences, The University of Adelaide. Credit: The University of Adelaide

Viral invasion involves the virus attaching to and interacting with specific proteins produced by the host cell, known as viral interacting proteins (VIPs).

In the study, researchers found signs of adaptation in 42 different human genes encoding VIPs.

“We found VIP signals in five populations from East Asia and suggest the ancestors of modern East Asians were first exposed to coronaviruses over 20,000 years ago,” said Dr. Souilmi.

“We found the 42 VIPs are primarily active in the lungs — the tissue most affected by coronaviruses — and confirmed that they interact directly with the virus underlying the current pandemic.”

Ray Tobler

Dr. Ray Tobler, Australian Centre for Ancient DNA, within the University of Adelaide’s School of Biological Sciences. Credit: The University of Adelaide

Other independent studies have shown that mutations in VIP genes may mediate coronavirus susceptibility and also the severity of COVID-19 symptoms. And several VIPs are either currently being used in drugs for COVID-19 treatments or are part of clinical trials for further drug development.

“Our past interactions with viruses have left telltale genetic signals that we can leverage to identify genes influencing infection and disease in modern populations, and can inform drug repurposing efforts and the development of new treatments,” said co-author Dr. Ray Tobler, from the University of Adelaide’s School of Biological Sciences.

“By uncovering the genes previously impacted by historical viral outbreaks, our study points to the promise of evolutionary genetic analyses as a new tool in fighting the outbreaks of the future,” said Dr. Souilmi.

The researchers also note that their results in no way supersede pre-existing public health policies and protections, such as mask-wearing, social distancing, and vaccinations.

Reference: “An ancient viral epidemic involving host coronavirus interacting genes more than 20,000 years ago in East Asia” by Yassine Souilmi, M. Elise Lauterbur, Ray Tobler, Christian D. Huber, Angad S. Johar, Shayli Varasteh Moradi, Wayne A. Johnston, Nevan J. Krogan, Kirill Alexandrov and David Enard, 24 June 2021, Current Biology.
DOI: 10.1016/j.cub.2021.05.067

The team involved in this study also included researchers from Australian National University and Queensland University of Technology.

Scientists Can Now Design Single Atom Catalysts for Important Chemical Reactions
Scientists Can Now Design Single Atom Catalysts for Important Chemical Reactions
Single Rhodium Atom Alloy Catalyzes Propane to Propene Reaction

Artistic rendering of the propane dehydrogenation process taking place on the novel single atom alloy catalyst, as predicted by theory. The picture shows the transition state obtained from a quantum chemistry calculation on a supercomputer, i.e. the molecular configuration of maximum energy along the reaction path. Credit: Charles Sykes & Michail Stamatakis

Using fundamental calculations of molecular interactions, researchers created a catalyst with 100% selectivity in producing propylene, a key precursor to plastics and fabric manufacturing.

Researchers at Tufts University, University College London (UCL), Cambridge University and University of California at Santa Barbara have demonstrated that a catalyst can indeed be an agent of change. In a study published today in Science, they used quantum chemical simulations run on supercomputers to predict a new catalyst architecture as well as its interactions with certain chemicals, and demonstrated in practice its ability to produce propylene – currently in short supply – which is critically needed in the manufacture of plastics, fabrics and other chemicals. The improvements have potential for highly efficient, “greener” chemistry with a lower carbon footprint.

The demand for propylene is about 100 million metric tons per year (worth about $200 billion), and there is simply not enough available at this time to meet surging demand. After sulfuric acid and ethylene, propylene production is the third largest conversion process in the chemical industry by scale. The most common method for producing propylene and ethylene is steam cracking, which has a yield limited to 85% and is one of the most energy intensive processes in the chemical industry. The traditional feedstocks for producing propylene are by-products from oil and gas operations, but the shift to shale gas has limited its production.

Typical catalysts used to produce propylene from the propane found in shale gas are made up of combinations of metals that can have a random, complex structure at the atomic level. The reactive atoms are usually clustered together in many different ways, making it difficult to design new catalysts from fundamental calculations of how the chemicals might interact with the catalytic surface.

By contrast, single-atom alloy catalysts, discovered at Tufts University and first reported in Science in 2012, disperse single reactive metal atoms in a more inert catalyst surface, at a density of about 1 reactive atom to 100 inert atoms. This enables a well-defined interaction between a single catalytic atom and the chemical being processed, uncomplicated by extraneous interactions with other reactive metals nearby. Reactions catalyzed by single-atom alloys tend to be clean and efficient, and, as demonstrated in the current study, they are now predictable by theoretical methods.

“We took a new approach to the problem by using first principles calculations run on supercomputers with our collaborators at University College London and Cambridge University, which enabled us to predict what the best catalyst would be for converting propane into propylene,” said Charles Sykes, the John Wade Professor in the Department of Chemistry at Tufts University and corresponding author of the study.

These calculations, which led to predictions of reactivity on the catalyst surface, were confirmed by atomic-scale imaging and reactions run on model catalysts. The researchers then synthesized single-atom alloy nanoparticle catalysts and tested them under industrially relevant conditions. In this particular application, rhodium (Rh) atoms dispersed on a copper (Cu) surface worked best to dehydrogenate propane to make propylene.

“Improvement of commonly used heterogeneous catalysts has mostly been a trial-and-error process,” said Michail Stamatakis, associate professor of chemical engineering at UCL and co-corresponding author of the study. “The single-atom catalysts allow us to calculate from first principles how molecules and atoms interact with each other at the catalytic surface, thereby predicting reaction outcomes. In this case, we predicted rhodium would be very effective at pulling hydrogens off molecules like methane and propane – a prediction that ran counter to common wisdom but nevertheless turned out to be incredibly successful when put into practice. We now have a new method for the rational design of catalysts.”

The single-atom Rh catalyst was highly efficient, with 100% selective production of propylene, compared to 90% for current industrial propylene production catalysts, where selectivity refers to the proportion of reactions at the surface that lead to the desired product. “That level of efficiency could lead to large cost savings and millions of tons of carbon dioxide not being emitted into the atmosphere if it’s adopted by industry,” said Sykes.
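As a rough, back-of-envelope illustration of what that selectivity gap means at scale (ignoring molar-mass differences, conversion per pass, and recycle streams, none of which are given in the article), the sketch below compares how much converted feed ends up as unwanted byproducts at 90% versus 100% selectivity for a 100-million-ton annual demand.

```python
demand_mt = 100  # approximate annual propylene demand, million metric tons (from the article)

for selectivity in (0.90, 1.00):
    converted = demand_mt / selectivity   # product-equivalent mass that must react to meet demand
    byproduct = converted - demand_mt     # mass that ends up as something other than propylene
    print(f"{selectivity:.0%} selective: ~{converted:.0f} Mt converted, ~{byproduct:.0f} Mt byproducts")
```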

Not only are the single atom alloy catalysts more efficient, but they also tend to run reactions under milder conditions and lower temperatures and thus require less energy to run than conventional catalysts. They can be cheaper to produce, requiring only a small fraction of precious metals like platinum or rhodium, which can be very expensive. For example, the price of rhodium is currently around $22,000 per ounce, while copper, which comprises 99% of the catalyst, costs just 30 cents an ounce. The new rhodium/copper single-atom alloy catalysts are also resistant to coking – a ubiquitous problem in industrial catalytic reactions in which high carbon content intermediates — basically, soot — build up on the surface of the catalyst and begin inhibiting the desired reactions. These improvements are a recipe for “greener” chemistry with a lower carbon footprint.

“This work further demonstrates the great potential of single-atom alloy catalysts for addressing inefficiencies in the catalyst industry, which in turn has very large economic and environmental payoffs,” said Sykes.

Reference: “First-principles design of a single-atom–alloy propane dehydrogenation catalyst” by Ryan T. Hannagan, Georgios Giannakakis, Romain Réocreux, Julia Schumann, Jordan Finzel, Yicheng Wang, Angelos Michaelides, Prashant Deshlahra, Phillip Christopher, Maria Flytzani-Stephanopoulos, Michail Stamatakis and E. Charles H. Sykes, 25 June 2021, Science.
DOI: 10.1126/science.abg8389

Are We Missing Other Earths? Dramatic New Evidence Uncovered by Astronomers
Are We Missing Other Earths? Dramatic New Evidence Uncovered by Astronomers
Planet Lost in the Glare of Binary Stars

This illustration depicts a planet partially hidden in the glare of its host star and a nearby companion star. After examining a number of binary stars, astronomers have concluded that Earth-sized planets in many two-star systems might be going unnoticed by transit searches, which look for changes in the light from a star when a planet passes in front of it. The light from the second star makes it more difficult to detect the changes in the host star’s light when the planet passes in front of it. Credit: International Gemini Observatory/NOIRLab/NSF/AURA/J. da Silva

Astronomers studying stellar pairs uncover evidence that there could be many more Earth-sized planets than previously thought.

Some exoplanet searches could be missing nearly half of the Earth-sized planets around other stars. New findings from a team using the international Gemini Observatory and the WIYN 3.5-meter Telescope at Kitt Peak National Observatory suggest that Earth-sized worlds could be lurking undiscovered in binary star systems, hidden in the glare of their parent stars. As roughly half of all stars are in binary systems, this means that astronomers could be missing many Earth-sized worlds.

Earth-sized planets may be much more common than previously realized. Astronomers working at NASA Ames Research Center have used the twin telescopes of the international Gemini Observatory, a Program of NSF’s NOIRLab, to determine that many planet-hosting stars identified by NASA’s TESS exoplanet-hunting mission[1] are actually pairs of stars — known as binary stars — where the planets orbit one of the stars in the pair. After examining these binary stars, the team has concluded that Earth-sized planets in many two-star systems might be going unnoticed by transit searches like TESS’s, which look for changes in the light from a star when a planet passes in front of it.[2] The light from the second star makes it more difficult to detect the changes in the host star’s light when the planet transits.
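To see why a companion star hides small planets from transit searches, consider the standard transit-depth relation: the fractional dip in brightness is roughly the square of the planet-to-star radius ratio, and any constant light from an unresolved companion dilutes that dip. The sketch below uses illustrative values only (it is not the study's analysis): an Earth-sized transit of a Sun-like star produces a dip of about 84 parts per million, and an equally bright companion cuts the observed dip in half.

```python
def transit_depth(r_planet, r_star):
    """Fractional dip in brightness when a planet crosses its host star."""
    return (r_planet / r_star) ** 2

def diluted_depth(r_planet, r_star, flux_host, flux_companion):
    """Observed dip when an unresolved companion adds constant extra light to the measurement."""
    return transit_depth(r_planet, r_star) * flux_host / (flux_host + flux_companion)

earth_radius, sun_radius = 6.371e6, 6.963e8   # meters
print(f"single star:  {transit_depth(earth_radius, sun_radius) * 1e6:.0f} ppm dip")
print(f"equal binary: {diluted_depth(earth_radius, sun_radius, 1.0, 1.0) * 1e6:.0f} ppm dip")
```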

The team started out by trying to determine whether some of the exoplanet host stars identified with TESS were actually unknown binary stars. Physical pairs of stars that are close together can be mistaken for single stars unless they are observed at extremely high resolution. So the team turned to both Gemini telescopes to inspect a sample of exoplanet host stars in painstaking detail. Using a technique called speckle imaging,[3] the astronomers set out to see whether they could spot undiscovered stellar companions.

Using the `Alopeke and Zorro instruments on the Gemini North and South telescopes in Hawai‘i and Chile, respectively,[4] the team observed hundreds of nearby stars that TESS had identified as potential exoplanet hosts. They discovered that 73 of these stars are really binary star systems that had appeared as single points of light until observed at higher resolution with Gemini. “With the Gemini Observatory’s 8.1-meter telescopes, we obtained extremely high-resolution images of exoplanet host stars and detected stellar companions at very small separations,” said Katie Lester of NASA’s Ames Research Center, who led this work.

Lester’s team also studied an additional 18 binary stars previously found among the TESS exoplanet hosts using the NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI) on the WIYN 3.5-meter Telescope at Kitt Peak National Observatory, also a Program of NSF’s NOIRLab.

After identifying the binary stars, the team compared the sizes of the detected planets in the binary star systems to those in single-star systems. They realized that the TESS spacecraft found both large and small exoplanets orbiting single stars, but only large planets in binary systems.

These results imply that a population of Earth-sized planets could be lurking in binary systems and going undetected using the transit method employed by TESS and many other planet-hunting telescopes. Some scientists had suspected that transit searches might be missing small planets in binary systems, but the new study provides observational support to back it up and shows which sizes of exoplanets are affected.[5]

“We have shown that it is more difficult to find Earth-sized planets in binary systems because small planets get lost in the glare of their two parent stars,” Lester stated. “Their transits are ‘filled in’ by the light from the companion star,” added Steve Howell of NASA’s Ames Research Center, who leads the speckle imaging effort and was involved in this research.

“Since roughly 50% of stars are in binary systems, we could be missing the discovery of — and the chance to study — a lot of Earth-like planets,” Lester concluded.

The possibility of these missing worlds means that astronomers will need to use a variety of observational techniques before concluding that a given binary star system has no Earth-like planets. “Astronomers need to know whether a star is single or binary before they claim that no small planets exist in that system,” explained Lester. “If it’s single, then you could say that no small planets exist. But if the host is in a binary, you wouldn’t know whether a small planet is hidden by the companion star or does not exist at all. You would need more observations with a different technique to figure that out.”

As part of their study, Lester and her colleagues also analyzed how far apart the stars are in the binary systems where TESS had detected large planets. The team found that the stars in the exoplanet-hosting pairs were typically farther apart than binary stars not known to have planets.[6] This could suggest that planets do not form around stars that have close stellar companions.

“This speckle imaging survey illustrates the critical need for NSF telescope facilities to characterize newly discovered planetary systems and develop our understanding of planetary populations,” said National Science Foundation Division of Astronomical Sciences Program Officer Martin Still.

“This is a major finding in exoplanet work,” Howell commented. “The results will help theorists create their models for how planets form and evolve in double-star systems.”

Notes

  1. TESS is the Transiting Exoplanet Survey Satellite, a NASA mission designed to search for planets orbiting other stars in a survey of around 75% of the entire night sky. The mission launched in 2018 and has detected more than 3500 candidate exoplanets, of which more than 130 have been confirmed. The satellite looks for exoplanets by observing their host stars; a transiting exoplanet causes a subtle but measurable dip in the brightness of its host star as it crosses in front of the star and blocks some of its light.
  2. The transit technique is one way of discovering exoplanets. It involves looking for regular decreases in the light of a star that could be caused by a planet passing in front of or “transiting” the star and blocking some of the starlight.
  3. Speckle imaging is an astronomical technique that allows astronomers to see past the blur of the atmosphere by taking many quick observations in rapid succession. By combining these observations, it is possible to cancel out the blurring effect of the atmosphere, which affects ground-based astronomy by causing stars in the night sky to twinkle.
  4. `Alopeke & Zorro are identical imaging instruments permanently mounted on the Gemini North and South telescopes. Their names mean “fox” in Hawaiian and Spanish, respectively, reflecting their respective locations on Maunakea in Hawaiʻi and on Cerro Pachón in Chile.
  5. The team found that planets twice the size of Earth or smaller could not be detected using the transit method when observing binary systems.
  6. Lester’s team found that the exoplanet-hosting binary stars they identified had average separations of about 100 astronomical units. (An astronomical unit is the average distance between the Sun and Earth.) Binary stars that are not known to host planets are typically separated by around 40 astronomical units.
More information

This research is presented in the paper “Speckle Observations of TESS Exoplanet Host Stars. II. Stellar Companions at 1-1000 AU and Implications for Small Planet Detection” to appear in the Astronomical Journal.

Reference: “Speckle Observations of TESS Exoplanet Host Stars. II. Stellar Companions at 1-1000 AU and Implications for Small Planet Detection” by Kathryn V. Lester, Rachel A. Matson, Steve B. Howell, Elise Furlan, Crystal L. Gnilka, Nicholas J. Scott, David R. Ciardi, Mark E. Everett, Zachary D. Hartman and Lea A. Hirsch, Accepted, Astronomical Journal.
arXiv:2106.13354

The team is composed of Kathryn V. Lester (NASA Ames Research Center), Rachel A. Matson (US Naval Observatory), Steve B. Howell (NASA Ames Research Center), Elise Furlan (Exoplanet Science Institute, Caltech), Crystal L. Gnilka (NASA Ames Research Center), Nicholas J. Scott (NASA Ames Research Center), David R. Ciardi (Exoplanet Science Institute, Caltech), Mark E. Everett (NSF’s NOIRLab), Zachary D. Hartman (Lowell Observatory & Department of Physics & Astronomy, Georgia State University), and Lea A. Hirsch (Kavli Institute for Particle Astrophysics and Cosmology, Stanford University).

NSF’s NOIRLab (National Optical-Infrared Astronomy Research Laboratory), the US center for ground-based optical-infrared astronomy, operates the international Gemini Observatory (a facility of NSF, NRC–Canada, ANID–Chile, MCTIC–Brazil, MINCyT–Argentina, and KASI–Republic of Korea), Kitt Peak National Observatory (KPNO), Cerro Tololo Inter-American Observatory (CTIO), the Community Science and Data Center (CSDC), and Vera C. Rubin Observatory (operated in cooperation with the Department of Energy’s SLAC National Accelerator Laboratory). It is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with NSF and is headquartered in Tucson, Arizona. The astronomical community is honored to have the opportunity to conduct astronomical research on Iolkam Du’ag (Kitt Peak) in Arizona, on Maunakea in Hawai‘i, and on Cerro Tololo and Cerro Pachón in Chile. We recognize and acknowledge the very significant cultural role and reverence that these sites have to the Tohono O’odham Nation, to the Native Hawaiian community, and to the local communities in Chile, respectively.

Evolutionary Unique: The Natural History and Conservation Importance of Elusive Chinese Mountain Cat
Evolutionary Unique: The Natural History and Conservation Importance of Elusive Chinese Mountain Cat
Chinese Mountain Cat Photo

Chinese mountain cat. Credit: Song Dazhao, CFCA

Study highlights the evolutionary uniqueness and premier conservation importance of the elusive Chinese mountain cat.

We know that the domestic cat has distant relatives that roam the earth – lions, tigers, cheetahs, and mountain lions. Less familiar are the 38 distinct species in the Family Felidae, many with strange names like pampas cat, kodkod, and rusty spotted cat. The new field of genomics – the unraveling of DNA genomes of separate species – is resolving old conundrums and revealing new secrets across the history of evolutionarily related species among cats, dogs, bears, and ourselves.

In the largest-ever study undertaken of Chinese cats, genetic detectives highlight the evolutionary uniqueness and premier conservation importance of the elusive Chinese mountain cat (Felis silvestris bieti), found only in the Tibetan plateau of China. Also called the Chinese desert cat or steppe cat, the Chinese mountain cat has a distinctive appearance: sand-colored fur with faint dark stripes, a thick tail, and light blue pupils.

The research is published in Science Advances.

This new study compared three different felines living in China: the Chinese mountain cat, Felis silvestris bieti; the Asiatic wildcat, Felis silvestris ornata; and feral domestic cats, Felis silvestris catus. The Asiatic wildcat has a distinctive spotted coat pattern and occupies a wide range extending from the Caspian Sea through western India and southern Mongolia to parts of western China. Approximately 600 million domestic cats are found across the world.

Chinese Mountain Cat

Chinese mountain cat. Credit: Song Dazhao, CFCA

The study was led by the Laboratory of Genomic Diversity at Peking University in Beijing and supported by an international team including lead genetic researchers at Nova Southeastern University, USA, and in Malaysia. The genomic data resolves a taxonomic classification uncertainty, reveals the timing of evolutionary divergence and pinpoints the prospects for survival of an important endangered species.

Using 270 individual samples, the molecular genetic study finds that the Chinese mountain cat is a unique subspecies of the wide-ranging Wildcat, Felis silvestris. The wildcat species is found throughout Europe, Africa, and much of Western Asia. The Felis silvestris bieti subspecies, however, is found only in China, being adapted to the prey and alpine climate of the Tibetan plateau.

Applying molecular clock hypotheses, the researchers estimated the evolutionary split between F. s. bieti and F. s. ornata at roughly 1.5 million years ago, while the genetic distance from both to their closest Felis relative, the black-footed cat Felis nigripes, is twice that, corresponding to about 3.0 million years. These divergence times support the classification of F. s. ornata and F. s. bieti as subspecies of Felis silvestris. A closely related subspecies from Central Asia and North Africa, Felis silvestris lybica, is the clear predecessor of the world’s domestic cats, including those throughout China. The cat domestication process happened 10,000-12,000 years ago in the Near East, at around the same time and locale when humankind’s ancestors morphed from peripatetic hunter-gatherers into sedentary farmers in the Fertile Crescent region.
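The dating logic rests on the molecular clock assumption that divergence time scales roughly linearly with genetic distance, so a lineage pair with twice the genetic distance is assigned roughly twice the age. A minimal sketch of that proportionality, using only the figures quoted above:

```python
# Under a molecular clock, divergence time is proportional to genetic distance.
t_bieti_vs_ornata = 1.5               # million years, estimate quoted in the article
distance_ratio_to_nigripes = 2.0      # genetic distance to the black-footed cat is "twice that"

t_vs_black_footed_cat = t_bieti_vs_ornata * distance_ratio_to_nigripes
print(f"estimated split from Felis nigripes: ~{t_vs_black_footed_cat:.1f} million years ago")
```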

The Chinese mountain cat faces several major threats, one from modern agricultural practices that divert precious habitat. A second, more existential threat is interbreeding with domestic cats brought by the growing human population in the cat’s limited habitat. And finally, climate change, which may be expanding the range of neighboring wildcats into the mountain cat’s core homeland.

“This study will help conservation scientists to identify threats and decide the best ways to conserve this special cat in its native range,” said Stephen J. O’Brien, Ph.D., a world-renowned geneticist and research scientist at NSU’s Halmos College of Arts and Sciences.

The study solidifies the taxonomic status of the mountain cat, Felis silvestris bieti, through an analysis of the cat’s genome, placing the cat in an evolutionary context relative to other species and subspecies of cats. These arcane taxonomic distinctions are important for conservation because scientists have to be sure they are all talking about the same animal when discussing strategies, and no less important, because legal protections have to be specific to the group in question. Without an agreed-upon taxonomy, legal protections and conservation come to a stop.

Another important result of this study is the finding that domestic cats in China are derived from the same common stock and origin as domestic cats throughout the world, and that there was not an independent origin of domesticity in China. Previous studies have hinted at close associations between early Chinese farming communities and local wild animals, including Asian mountain cats, and that some of these animals may have begun the crossing from the wild to living with people in settled communities.

What the current study shows is that this did not happen with domestic cats; now the focus of research can move to determining why. Why were some species domesticated in some places but not in others? Why did these processes happen when they did, and what conditions allowed, or even promoted, the integration of wild animals into human societies? Answering these related questions will help us understand the history of early China, and indeed the history of the ancient anthropocentric world, in more detail.

Reference: “Genomic evidence for the Chinese mountain cat as a wildcat conspecific (Felis silvestris bieti) and its introgression to domestic cats” by He Yu, Yue-Ting Xing, Hao Meng, Bing He, Wen-Jing Li, Xin-Zhang Qi, Jian-You Zhao, Yan Zhuang, Xiao Xu, Nobuyuki Yamaguchi, Carlos A. Driscoll, Stephen J. O’Brien and Shu-Jin Luo, 23 June 2021, Science Advances.
DOI: 10.1126/sciadv.abg0221

Habitable Planets With Earth-Like Biospheres May Be Much Rarer Than Thought
Habitable Planets With Earth-Like Biospheres May Be Much Rarer Than Thought

Habitable Planet Earth-Like Biosphere

A new analysis of known exoplanets has revealed that Earth-like conditions on potentially habitable planets may be much rarer than previously thought. The work focuses on the conditions required for oxygen-based photosynthesis to develop on a planet, which would enable complex biospheres of the type found on Earth. The study was recently published in the Monthly Notices of the Royal Astronomical Society.

Confirmed planets in our own Milky Way galaxy now number in the thousands. However, planets that are both Earth-like and in the habitable zone — the region around a star where the temperature is just right for liquid water to exist on the surface — are much less common.

At the moment, only a handful of such rocky and potentially habitable exoplanets are known. However, the new research indicates that none of these has the theoretical conditions to sustain an Earth-like biosphere by means of ‘oxygenic’ photosynthesis — the mechanism plants on Earth use to convert light and carbon dioxide into oxygen and nutrients.

Only one of those planets comes close to receiving the stellar radiation necessary to sustain a large biosphere: Kepler-442b, a rocky planet about twice the mass of the Earth, orbiting a moderately hot star around 1,200 light-years away.

Kepler-442b Compared With Earth

An artistic representation of the potentially habitable planet Kepler-442b (left), compared with Earth (right). Credit: Ph03nix1986 / Wikimedia Commons

The study looked in detail at how much energy is received by a planet from its host star, and whether living organisms would be able to efficiently produce nutrients and molecular oxygen, both essential elements for complex life as we know it, via normal oxygenic photosynthesis.

By calculating the amount of photosynthetically active radiation (PAR) that a planet receives from its star, the team discovered that stars around half the temperature of our Sun cannot sustain Earth-like biospheres because they do not provide enough energy in the correct wavelength range. Oxygenic photosynthesis would still be possible, but such planets could not sustain a rich biosphere.
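The underlying calculation can be pictured with a simple blackbody estimate: treat the star as a blackbody and ask what fraction of its emitted photons falls in the 400-700 nm band usable for oxygenic photosynthesis. The sketch below is not the study's model (which evaluates the actual PAR photon flux at each planet's orbital distance); it only illustrates how sharply that in-band fraction drops for cooler stars, using approximate temperatures for a Sun-like star and for a star at about half the Sun's temperature.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant (SI)

def photon_spectrum(lam, T):
    """Blackbody spectral photon emission versus wavelength (arbitrary units)."""
    return 1.0 / (lam**4 * (np.exp(h * c / (lam * k * T)) - 1.0))

def par_fraction(T, band=(400e-9, 700e-9)):
    """Fraction of emitted photons falling in the photosynthetically active 400-700 nm band."""
    lam = np.linspace(100e-9, 10e-6, 200_000)
    spectrum = photon_spectrum(lam, T)
    in_band = (lam >= band[0]) & (lam <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()   # uniform grid, so sums stand in for integrals

for label, T in (("Sun-like star", 5800), ("star at half the Sun's temperature", 2900)):
    print(f"{label} ({T} K): ~{par_fraction(T):.0%} of photons in the 400-700 nm band")
```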

Planets around even cooler stars, known as red dwarfs, which smolder at roughly a third of our Sun’s temperature, could not receive enough energy to even activate photosynthesis. Stars that are hotter than our Sun are much brighter and emit up to ten times more radiation in the necessary range for effective photosynthesis than red dwarfs do; however, they generally do not live long enough for complex life to evolve.

“Since red dwarfs are by far the most common type of star in our galaxy, this result indicates that Earth-like conditions on other planets may be much less common than we might hope,” comments Prof. Giovanni Covone of the University of Naples, lead author of the study.

He adds: “This study puts strong constraints on the parameter space for complex life, so unfortunately it appears that the ‘sweet spot’ for hosting a rich Earth-like biosphere is not so wide.”

Future missions such as the James Webb Space Telescope (JWST), due for launch later this year, will have the sensitivity to look at distant worlds around other stars and shed new light on what it really takes for a planet to host life as we know it.

Reference: “Efficiency of the oxygenic photosynthesis on Earth-like planets in the habitable zone” by Giovanni Covone, Riccardo M Ienco, Luca Cacciapuoti and Laura Inno, 19 May 2021, Monthly Notices of the Royal Astronomical Society.
DOI: 10.1093/mnras/stab1357

Inkjet Printing “Impossible Materials” – Bend Light, Manipulate Energy, or Have Chameleon-Like Abilities
Inkjet Printing “Impossible Materials” – Bend Light, Manipulate Energy, or Have Chameleon-Like Abilities
Microwave Resonator Metamaterial

A thin film polymer tunes the properties of an inkjet printed array of small microwave resonators. The composite device can be tuned to capture or transmit different wavelengths of microwave energy. Credit: Fio Omenetto, Tufts University

Engineers develop inexpensive, scalable method to make metamaterials that manipulate microwave energy in ways conventional materials cannot.

Engineers at Tufts University have developed new methods to more efficiently fabricate materials that behave in unusual ways when interacting with microwave energy, with potential implications for telecommunications, GPS, radar, mobile devices, and medical devices. Known as metamaterials, they are sometimes referred to as “impossible materials” because they could, in theory, bend energy around objects to make them appear invisible, concentrate the transmission of energy into focused beams, or have chameleon-like abilities to reconfigure their absorption or transmission of different frequency ranges.

The innovation, described today in Nature Electronics, constructs the metamaterials using low-cost inkjet printing, making the method widely accessible and scalable while also providing benefits such as the ability to be applied to large conformable surfaces or interface with a biological environment. It is also the first demonstration that organic polymers can be used to electrically “tune” the properties of the metamaterials.

Electromagnetic metamaterials and meta-surfaces — their two-dimensional counterparts — are composite structures that interact with electromagnetic waves in peculiar ways. The materials are composed of tiny structures — smaller than the wavelengths of the energy they influence — carefully arranged in repeating patterns. The ordered structures display unique wave interaction capabilities that enable the design of unconventional mirrors, lenses and filters able to either block, enhance, reflect, transmit, or bend waves beyond the possibilities offered by conventional materials.

The Tufts engineers fabricated their metamaterials by using conducting polymers as a substrate, then inkjet printing specific patterns of electrodes to create microwave resonators. Resonators are important components used in communications devices that can help filter select frequencies of energy that are either absorbed or transmitted. The printed devices can be electrically tuned to adjust the range of frequencies that the modulators can filter.
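As a generic illustration of what electrically tuned filtering means for a resonator (this is not the paper's device physics, which relies on organic electrochemical transistors; the inductance and capacitance values below are hypothetical), shifting an electrical property such as the effective capacitance moves the frequency at which the structure resonates, and therefore which microwave frequencies it filters:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an idealized LC resonator: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 1e-9  # hypothetical 1 nH inductance
for C in (1e-12, 2e-12, 4e-12):  # hypothetical effective capacitances
    f0 = resonant_frequency_hz(L, C)
    print(f"C = {C * 1e12:.0f} pF -> resonates near {f0 / 1e9:.2f} GHz")
```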

Metamaterial devices operating in the microwave spectrum could have widespread applications to telecommunications, GPS, radar, and mobile devices, where metamaterials can significantly boost their signal sensitivity and transmission power. The metamaterials produced in the study could also be applied to medical device communications because the biocompatible nature of the thin film organic polymer could enable the incorporation of enzyme-coupled sensors, while its inherent flexibility could permit devices to be fashioned into conformable surfaces appropriate for use on or in the body.

“We demonstrated the ability to electrically tune the properties of meta-surfaces and meta-devices operating in the microwave region of the electromagnetic spectrum,” said Fiorenzo Omenetto, Frank C. Doble Professor of Engineering at Tufts University School of Engineering, director of the Tufts Silklab where the materials were created, and corresponding author of the study. “Our work represents a promising step compared to current meta-device technologies, which largely depend on complex and costly materials and fabrication processes.”

The tuning strategy developed by the research team relies entirely on thin-film materials that can be processed and deposited through mass-scalable techniques, such as printing and coating, on a variety of substrates. The ability to tune the electrical properties of the substrate polymers enabled the authors to operate the devices within a much wider range of microwave energies and up to higher frequencies (5 GHz) than was assumed to be possible with conventional non-meta materials (<0.1 GHz).

Development of metamaterials for visible light, which has nanometer scale wavelength, is still in its early stages due to the technical challenges of making tiny arrays of substructures at that scale, but metamaterials for microwave energy, which has centimeter-scale wavelengths, are more amenable to the resolution of common fabrication methods. The authors suggest that the fabrication method they describe using inkjet printing and other forms of deposition on thin film conducting polymers could begin to test the limits of metamaterials operating at higher frequencies of the electromagnetic spectrum.

“This research is, potentially, only the beginning,” said Giorgio Bonacchini, a former postdoctoral fellow in Omenetto’s lab who is now at Stanford University and first author of the study. “Hopefully, our proof-of-concept device will encourage further explorations of how organic electronic materials and devices can be successfully used in reconfigurable metamaterials and meta-surfaces across the entire electromagnetic spectrum.”

Reference: “Reconfigurable microwave metadevices based on organic electrochemical transistors” by Giorgio E. Bonacchini and Fiorenzo G. Omenetto, 21 June 2021, Nature Electronics.
DOI: 10.1038/s41928-021-00590-0

High Capacity DNA Data Storage: Could All Your Digital Photos Be Stored As DNA?
High Capacity DNA Data Storage: Could All Your Digital Photos Be Stored As DNA?
DNA Data Storage

MIT biological engineers have demonstrated a way to easily retrieve data files stored as DNA. This could be a step toward using DNA archives to store enormous quantities of photos, images, and other digital content. Credit: Image: MIT News. Small icons courtesy of the researchers

A technique for labeling and retrieving DNA data files from a large pool could help make DNA data storage feasible.

On Earth right now, there are about 10 trillion gigabytes of digital data, and every day, humans produce emails, photos, tweets, and other digital files that add up to another 2.5 million gigabytes of data. Much of this data is stored in enormous facilities known as exabyte data centers (an exabyte is 1 billion gigabytes), which can be the size of several football fields and cost around $1 billion to build and maintain.

Many scientists believe that an alternative solution lies in the molecule that contains our genetic information: DNA, which evolved to store massive quantities of information at very high density. A coffee mug full of DNA could theoretically store all of the world’s data, says Mark Bathe, an MIT professor of biological engineering.

“We need new solutions for storing these massive amounts of data that the world is accumulating, especially the archival data,” says Bathe, who is also an associate member of the Broad Institute of MIT and Harvard. “DNA is a thousandfold denser than even flash memory, and another property that’s interesting is that once you make the DNA polymer, it doesn’t consume any energy. You can write the DNA and then store it forever.”

DNA Files Photo

A photo of the DNA “files.” Each silica sphere contains DNA sequences that encode a particular image, and the outside of the sphere is coated with nucleotide barcodes that describe the image contents. Credit: Courtesy of the researchers

Scientists have already demonstrated that they can encode images and pages of text as DNA. However, an easy way to pick out the desired file from a mixture of many pieces of DNA will also be needed. Bathe and his colleagues have now demonstrated one way to do that, by encapsulating each data file into a 6-micrometer particle of silica, which is labeled with short DNA sequences that reveal the contents.

Using this approach, the researchers demonstrated that they could accurately pull out individual images stored as DNA sequences from a set of 20 images. Given the number of possible labels that could be used, this approach could scale up to 10^20 files.

Bathe is the senior author of the study, which appears today in Nature Materials. The lead authors of the paper are MIT senior postdoc James Banal, former MIT research associate Tyson Shepherd, and MIT graduate student Joseph Berleant.

Stable storage

Digital storage systems encode text, photos, or any other kind of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
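A toy version of that bit-to-base mapping, using the exact scheme mentioned above (G or C for 0, A or T for 1). The random choice within each pair is only an illustration, not the researchers' actual encoding, which would also need error correction and addressing:

```python
import random

ZERO_BASES, ONE_BASES = "GC", "AT"   # the article's example mapping: G/C -> 0, A/T -> 1

def encode(bits: str) -> str:
    """Encode a bit string as a DNA sequence, picking either base in the matching pair."""
    return "".join(random.choice(ZERO_BASES if b == "0" else ONE_BASES) for b in bits)

def decode(dna: str) -> str:
    """Recover the bit string from the DNA sequence."""
    return "".join("0" if base in ZERO_BASES else "1" for base in dna)

bits = "01101000"            # one byte of data
strand = encode(bits)        # e.g. "GATTCAAA"
assert decode(strand) == bits
print(bits, "->", strand)
```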

DNA has several other features that make it desirable as a storage medium: It is extremely stable, and it is fairly easy (but expensive) to synthesize and sequence. Also, because of its high density — each nucleotide, equivalent to up to two bits, is about 1 cubic nanometer — an exabyte of data stored as DNA could fit in the palm of your hand.
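Those density figures lead to a striking back-of-envelope estimate (idealized: it ignores the encoding overhead, encapsulation, and physical handling that real DNA storage requires):

```python
exabyte_bits = 1e18 * 8                 # 1 exabyte expressed in bits
nucleotides = exabyte_bits / 2          # at roughly 2 bits per nucleotide
volume_nm3 = nucleotides * 1.0          # at roughly 1 cubic nanometer per nucleotide
volume_mm3 = volume_nm3 / 1e18          # 1 mm^3 = 1e18 nm^3
print(f"~{volume_mm3:.0f} cubic millimeters of tightly packed DNA per exabyte")
```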

Images Stored in DNA

The researchers stored images like these, pictured, in DNA. Credit: Courtesy of the researchers

One obstacle to this kind of data storage is the cost of synthesizing such large amounts of DNA. Currently it would cost $1 trillion to write one petabyte of data (1 million gigabytes). To become competitive with magnetic tape, which is often used to store archival data, Bathe estimates that the cost of DNA synthesis would need to drop by about six orders of magnitude. Bathe says he anticipates that will happen within a decade or two, similar to how the cost of storing information on flash drives has dropped dramatically over the past couple of decades.
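In per-gigabyte terms, the cost gap the article describes looks like this (a straightforward unit conversion of the figures quoted above):

```python
cost_per_petabyte_today = 1e12                      # ~$1 trillion to write one petabyte today
cost_per_gb_today = cost_per_petabyte_today / 1e6   # 1 petabyte = 1 million gigabytes
cost_per_gb_target = cost_per_gb_today / 1e6        # a six-order-of-magnitude reduction

print(f"today:  ~${cost_per_gb_today:,.0f} per gigabyte written")    # ~$1,000,000
print(f"target: ~${cost_per_gb_target:,.2f} per gigabyte written")   # ~$1.00
```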

Aside from the cost, the other major bottleneck in using DNA to store data is the difficulty in picking out the file you want from all the others.

“Assuming that the technologies for writing DNA get to a point where it’s cost-effective to write an exabyte or zettabyte of data in DNA, then what? You’re going to have a pile of DNA, which is a gazillion files, images or movies and other stuff, and you need to find the one picture or movie you’re looking for,” Bathe says. “It’s like trying to find a needle in a haystack.”

DNA files are conventionally retrieved using PCR (polymerase chain reaction). Each DNA data file includes a sequence that binds to a particular PCR primer. To pull out a specific file, that primer is added to the sample to find and amplify the desired sequence. However, one drawback to this approach is that there can be crosstalk between the primer and off-target DNA sequences, leading unwanted files to be pulled out. Also, the PCR retrieval process requires enzymes and ends up consuming most of the DNA that was in the pool.

“You’re kind of burning the haystack to find the needle, because all the other DNA is not getting amplified and you’re basically throwing it away,” Bathe says.

File retrieval

As an alternative approach, the MIT team developed a new retrieval technique that involves encapsulating each DNA file into a small silica particle. Each capsule is labeled with single-stranded DNA “barcodes” that correspond to the contents of the file. To demonstrate this approach in a cost-effective manner, the researchers encoded 20 different images into pieces of DNA about 3,000 nucleotides long, which is equivalent to about 100 bytes. (They also showed that the capsules could fit DNA files up to a gigabyte in size.)

Each file was labeled with barcodes corresponding to labels such as “cat” or “airplane.” When the researchers want to pull out a specific image, they remove a sample of the DNA and add primers that correspond to the labels they’re looking for — for example, “cat,” “orange,” and “wild” for an image of a tiger, or “cat,” “orange,” and “domestic” for a housecat.

The primers are labeled with fluorescent or magnetic particles, making it easy to pull out and identify any matches from the sample. This allows the desired file to be removed while leaving the rest of the DNA intact to be put back into storage. Their retrieval process allows Boolean logic statements such as “president AND 18th century” to generate George Washington as a result, similar to what is retrieved with a Google image search.
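In software terms, the barcode scheme behaves like a tag index queried with Boolean AND. The sketch below is an in-silico analogy only: the label names come from the article's examples, and in the real system the "query" is a set of fluorescently or magnetically labeled primers that physically pull matching capsules out of the pool.

```python
# Hypothetical file-to-barcode index standing in for the labeled silica capsules.
archive = {
    "tiger.jpg":      {"cat", "orange", "wild"},
    "housecat.jpg":   {"cat", "orange", "domestic"},
    "washington.jpg": {"president", "18th century"},
}

def retrieve(query: set) -> list:
    """Return every file whose label set contains all queried labels (Boolean AND)."""
    return [name for name, labels in archive.items() if query <= labels]

print(retrieve({"cat", "orange", "wild"}))       # ['tiger.jpg']
print(retrieve({"president", "18th century"}))   # ['washington.jpg']
```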

“At the current state of our proof-of-concept, we’re at the 1 kilobyte per second search rate. Our file system’s search rate is determined by the data size per capsule, which is currently limited by the prohibitive cost to write even 100 megabytes worth of data on DNA, and the number of sorters we can use in parallel. If DNA synthesis becomes cheap enough, we would be able to maximize the data size we can store per file with our approach,” Banal says.

For their barcodes, the researchers used single-stranded DNA sequences from a library of 100,000 sequences, each about 25 nucleotides long, developed by Stephen Elledge, a professor of genetics and medicine at Harvard Medical School. If you put two of these labels on each file, you can uniquely label 10^10 (10 billion) different files, and with four labels on each, you can uniquely label 10^20 files.
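Those totals follow directly from the library size, counting each label position independently (a simplifying reading that matches the figures quoted):

```python
library_size = 100_000                 # barcode sequences available
print(f"{library_size ** 2:.0e} files addressable with two labels")   # ~1e+10
print(f"{library_size ** 4:.0e} files addressable with four labels")  # ~1e+20
```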

George Church, a professor of genetics at Harvard Medical School, describes the technique as “a giant leap for knowledge management and search tech.”

“The rapid progress in writing, copying, reading, and low-energy archival data storage in DNA form has left poorly explored opportunities for precise retrieval of data files from huge (10^21 byte, zetta-scale) databases,” says Church, who was not involved in the study. “The new study spectacularly addresses this using a completely independent outer layer of DNA and leveraging different properties of DNA (hybridization rather than sequencing), and moreover, using existing instruments and chemistries.”

Bathe envisions that this kind of DNA encapsulation could be useful for storing “cold” data, that is, data that is kept in an archive and not accessed very often. His lab is spinning out a startup, Cache DNA, that is now developing technology for long-term storage of DNA, both for DNA data storage in the long-term, and clinical and other preexisting DNA samples in the near-term.

“While it may be a while before DNA is viable as a data storage medium, there already exists a pressing need today for low-cost, massive storage solutions for preexisting DNA and RNA samples from Covid-19 testing, human genomic sequencing, and other areas of genomics,” Bathe says.

Reference: “Random access DNA memory using Boolean search in an archival file storage system” by James L. Banal, Tyson R. Shepherd, Joseph Berleant, Hellen Huang, Miguel Reyes, Cheri M. Ackerman, Paul C. Blainey and Mark Bathe, 10 June 2021, Nature Materials.
DOI: 10.1038/s41563-021-01021-3

The research was funded by the Office of Naval Research, the National Science Foundation, and the U.S. Army Research Office.

Toward Safer Breast Implants: How Implant Surfaces Affect Immune Response
Toward Safer Breast Implants: How Implant Surfaces Affect Immune Response

Rice University bioengineer Omid Veiseh shows silicone breast implants with rough (left) and smooth surfaces. Credit: Photo by Jeff Fitlow/Rice University

Six-year effort includes researchers from Rice, MD Anderson, Baylor College of Medicine.

Rice University bioengineers collaborated on a six-year study that systematically analyzed how the surface architecture of breast implants influences the development of adverse effects, including an unusual type of lymphoma.

Every year, about 400,000 people receive silicone breast implants in the United States. According to FDA data, most of those implants need to be replaced within 10 years due to the buildup of scar tissue and other complications.

A team including researchers from the Massachusetts Institute of Technology (MIT), Rice, the University of Texas MD Anderson Cancer Center and Baylor College of Medicine published its findings online today in Nature Biomedical Engineering.

Rice University bioengineers (from left) Amanda Nash, Omid Veiseh and Samira Aghlara-Fotovat and collaborators systematically analyzed how the surface roughness of silicone breast implants influenced the development of adverse effects, which in rare cases can include an unusual type of lymphoma. Credit: Photo by Jeff Fitlow/Rice University

“The surface topography of an implant can drastically affect how the immune response perceives it, and this has important ramifications for the [implants’] design,” said Omid Veiseh, an assistant professor of bioengineering at Rice who began the research six years ago during a postdoctoral fellowship at MIT. “We hope this paper provides a foundation for plastic surgeons to evaluate and better understand how implant choice can affect the patient experience.”

The findings were co-authored by two dozen researchers, including co-lead authors Veiseh and Joshua Doloff of Johns Hopkins University, MIT’s Robert Langer and two of Veiseh’s collaborators from the Texas Medical Center, Baylor’s Courtney Hodges and MD Anderson’s Mark Clemens.

Human impact

Veiseh, whose lab focuses on developing and studying biocompatible materials, said he is particularly excited about the discovery that surface architecture can be tuned to reduce host immune responses and fibrosis to breast implants.

“There’s a lot we still don’t understand about how the immune system orchestrates its response to implants, and it is really important to understand that within the context of biomaterials,” Veiseh said.

Rice University bioengineering graduate students Samira Aghlara-Fotovat (left) and Amanda Nash worked with collaborators at Baylor College of Medicine and the University of Texas MD Anderson Cancer Center to correlate findings from MIT animal studies with clinical data from human patients. Credit: Photo by Jeff Fitlow/Rice University

Veiseh continued the research after joining Rice’s faculty in 2017 as a CPRIT Scholar from the Cancer Prevention and Research Institute of Texas. He and two Ph.D. students from his lab, Amanda Nash and Samira Aghlara-Fotovat, collaborated on the project with the research groups of MD Anderson’s Clemens and Baylor’s Hodges to correlate findings from MIT animal studies with clinical data from human patients.

“Clinically, we have observed that patients exposed to textured-surface breast implants can develop breast implant-associated large cell lymphoma (BIA-ALCL), but this has not occurred with smooth-surface implants,” said Clemens, an associate professor of plastic surgery at MD Anderson who leads a multidisciplinary treatment team on the disease. “This paper gives important novel insights into cancer pathogenesis with clear implications for preventing disease before it develops.”

Veiseh said the work also provided important clues that will guide follow-up studies.

“That’s the most exciting part of this: it could lead to safer, more compatible biomaterials and implant designs,” Veiseh said.

Surface analysis

Silicone breast implants have been in use since the 1960s. The earliest versions had smooth surfaces, but patients with these implants often experienced a complication called capsular contracture, in which scar tissue formed around the implant and squeezed it, creating pain or discomfort as well as visible deformities. Implants could also flip after implantation, requiring surgical adjustment or removal.

In the late 1980s, some companies introduced rougher surfaces intended to reduce capsular contracture rates and keep implants in place. These textured surfaces have peaks of varying heights; on some implants the peaks average hundreds of microns.

Rice University bioengineering graduate student Samira Aghlara-Fotovat holding miniature implants similar to those used in animal studies that explored how the immune system responds to a variety of breast implant surface textures. Credit: Photo by Jeff Fitlow/Rice University

In 2019, the FDA requested breast implant manufacturer Allergan recall highly textured breast implants that had an average surface roughness of about 80 microns due to risk of BIA-ALCL, a cancer of the immune system.

In 2015, Veiseh and Doloff, then postdocs in the lab of MIT’s Langer, began testing five commercially available implants with different surface designs to see how they would interact with surrounding tissue and the immune system. These included the highly textured one that had been previously recalled, one that was completely smooth and three that were somewhere in between.

Experimental results

In a study of rabbits, the researchers found tissue exposed to the more heavily textured implant surfaces showed signs of increased activity from macrophages — immune cells that normally clear out foreign cells and debris.

All of the implants stimulated immune cells called T cells, but in different ways. The study found implants with rougher surfaces stimulated more pro-inflammatory T cell responses. Among the non-smooth implants, those with the smallest degree of roughness (4 microns) stimulated T cells that appeared to inhibit tissue inflammation.

The findings suggest that rougher implants rub against surrounding tissue and cause more irritation. This may explain why the rougher implants can lead to lymphoma: The hypothesis is that some of the texture sloughs off and gets trapped in nearby tissue, where it provokes chronic inflammation that can eventually lead to cancer.

The researchers also tested miniaturized versions of the implants in mice, manufactured with the same techniques used for the human-sized versions. The more highly textured implants provoked more macrophage activity, more scar tissue formation, and higher levels of inflammatory T cells. The researchers worked with Hodges’ lab at Baylor to perform single-cell RNA sequencing of immune cells from these tissues to uncover the specific signals that made the immune cells more inflammatory.

“The surface properties of the implants have profoundly different effects on key signals between immune cells that help recognize and respond to foreign materials,” said Hodges, an assistant professor of molecular and cellular biology at Baylor. “The results show that the lightly textured surface avoided the strong negative cytokine immune response induced by the rough surface.”

Toward safer implants

After their animal studies, the researchers examined how human patients respond to different types of silicone breast implants by collaborating with MD Anderson on the analysis of tissue samples from BIA-ALCL patients.

They found evidence of the same types of immune responses observed in the animal studies. For example, tissue samples from patients who had carried highly textured implants for many years showed signs of a chronic, long-term immune response. Scar tissue was also thicker in patients with more highly textured implants.

“Doing across-the-board comparisons in mice, rabbits and then in human [tissue samples] really provides a much more robust and substantial body of evidence about how these compare to one another,” Veiseh said.

The authors said they hope their datasets will help other researchers optimize the design of silicone breast implants and other types of medical silicone implants for better safety.

“We are pleased that we were able to bring new materials science approaches to better understand issues of biocompatibility in the area of breast implants,” said Langer, the study’s senior author and MIT’s David H. Koch Institute Professor. “We also hope the studies that we conducted will be broadly useful in understanding how to design safer and more effective implants of any type.”

Reference: “The surface topography of silicone breast implants mediates the foreign body response in mice, rabbits and humans” by Joshua C. Doloff, Omid Veiseh, Roberto de Mezerville, Marcos Sforza, Tracy Ann Perry, Jennifer Haupt, Morgan Jamiel, Courtney Chambers, Amanda Nash, Samira Aghlara-Fotovat, Jessica L. Stelzel, Stuart J. Bauer, Sarah Y. Neshat, John Hancock, Natalia Araujo Romero, Yessica Elizondo Hidalgo, Isaac Mora Leiva, Alexandre Mendonça Munhoz, Ardeshir Bayat, Brian M. Kinney, H. Courtney Hodges, Roberto N. Miranda, Mark W. Clemens and Robert Langer, 21 June 2021, Nature Biomedical Engineering.
DOI: 10.1038/s41551-021-00739-4

Additional study co-authors include Jennifer Haupt and Morgan Jamiel of MIT; Jessica Stelzel, Stuart Bauer and Sarah Neshat of Johns Hopkins; Roberto De Mezerville, Tracy Ann Perry, John Hancock, Natalia Araujo Romero, Yessica Elizondo and Isaac Mora Leiva of Establishment Labs; Courtney Chambers of Baylor; Roberto Miranda of MD Anderson; Marcos Sforza of Dolan Park Hospital in the United Kingdom; Alexandre Mendonca Munhoz of the University of São Paulo; Ardeshir Bayat of the University of Manchester; and Brian Kinney of the University of Southern California.

The research was funded by Establishment Labs. Langer, Hancock, Bayat, Kinney and Perry are members of the scientific advisory board of Establishment Labs and hold equity in the company. Sforza and Munhoz are members of the medical advisory board of Establishment Labs and hold equity in the company. Clemens is an investigator on the US IDE Clinical Trial for the Study of Safety and Effectiveness of Motiva Implants. De Mezerville, Romero, Elizondo and Leiva are employees of Establishment Labs and hold equity in the company. Doloff and Veiseh are paid Establishment Labs consultants.

NASA’s Webb Telescope Will Look Back in Time, Use Quasars to Unlock the Secrets of the Early Universe
Galaxy With Brilliant Quasar

This is an artist’s concept of a galaxy with a brilliant quasar at its center. A quasar is a very bright, distant and active supermassive black hole that is millions to billions of times the mass of the Sun. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined. Quasars feed on infalling matter and unleash torrents of winds and radiation, shaping the galaxies in which they reside. Using the unique capabilities of Webb, scientists will study six of the most distant and luminous quasars in the universe. Credit: NASA, ESA and J. Olmsted (STScI)

Looking back in time, Webb will see quasars as they appeared billions of years ago

Outshining all the stars in their host galaxies combined, quasars are among the brightest objects in the universe. These brilliant, distant and active supermassive black holes shape the galaxies in which they reside. Shortly after its launch, scientists will use Webb to study six of the most far-flung and luminous quasars, along with their host galaxies, in the very young universe. They will examine what part quasars play in galaxy evolution during these early times. The team will also use the quasars to study the gas in the space between galaxies in the infant universe. Only with Webb’s extreme sensitivity to low levels of light and its superb angular resolution will this be possible.

Quasars are very bright, distant and active supermassive black holes that are millions to billions of times the mass of the Sun. Typically located at the centers of galaxies, they feed on infalling matter and unleash fantastic torrents of radiation. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined, and its jets and winds shape the galaxy in which it resides.

Shortly after its launch later this year, a team of scientists will train NASA’s James Webb Space Telescope on six of the most distant and luminous quasars. They will study the properties of these quasars and their host galaxies, and how they are interconnected during the first stages of galaxy evolution in the very early universe. The team will also use the quasars to examine the gas in the space between galaxies, particularly during the period of cosmic reionization, which ended when the universe was very young. They will accomplish this using Webb’s extreme sensitivity to low levels of light and its superb angular resolution.

Cosmic Reionization Infographic

(Click image to see full infographic.) More than 13 billion years ago, during the Era of Reionization, the universe was a very different place. The gas between galaxies was largely opaque to energetic light, making it difficult to observe young galaxies. What allowed the universe to become completely ionized, or transparent, eventually leading to the “clear” conditions detected in much of the universe today? The James Webb Space Telescope will peer deep into space to gather more information about objects that existed during the Era of Reionization to help us understand this major transition in the history of the universe. Credit: NASA, ESA, and J. Kang (STScI)

Webb: Visiting the Young Universe

As Webb peers deep into the universe, it will actually look back in time. Light from these distant quasars began its journey to Webb when the universe was very young and took billions of years to arrive. We will see things as they were long ago, not as they are today.

“All these quasars we are studying existed very early, when the universe was less than 800 million years old, or less than 6 percent of its current age. So these observations give us the opportunity to study galaxy evolution and supermassive black hole formation and evolution at these very early times,” explained team member Santiago Arribas, a research professor at the Department of Astrophysics of the Center for Astrobiology in Madrid, Spain. Arribas is also a member of Webb’s Near-Infrared Spectrograph (NIRSpec) Instrument Science Team.
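
For context, the “6 percent” figure follows directly from the universe’s current age of roughly 13.8 billion years (a standard value, assumed here rather than stated in the article):

\[
\frac{0.8\ \text{billion years}}{13.8\ \text{billion years}} \approx 0.058 \approx 6\%.
\]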

What Is Cosmological Redshift

(Click image to see full infographic.) The universe is expanding, and that expansion stretches light traveling through space in a phenomenon known as cosmological redshift. The greater the redshift, the greater the distance the light has traveled. As a result, telescopes with infrared detectors are needed to see light from the first, most distant galaxies. Credit: NASA, ESA, and L. Hustak (STScI)

The light from these very distant objects has been stretched by the expansion of space. This is known as cosmological redshift. The farther the light has to travel, the more it is redshifted. In fact, visible light emitted in the early universe is stretched so dramatically that it is shifted into the infrared by the time it reaches us. With its suite of infrared-tuned instruments, Webb is uniquely suited to studying this kind of light.
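
As a rough illustration (the specific redshift chosen here is an assumption for the sake of example, not a figure from the article), cosmological redshift stretches an emitted wavelength to an observed wavelength according to

\[
\lambda_{\text{obs}} = (1 + z)\,\lambda_{\text{emit}},
\]

so ultraviolet Lyman-alpha light emitted at 121.6 nm by a quasar at redshift z ≈ 7 arrives at about (1 + 7) × 121.6 nm ≈ 970 nm, squarely in the near-infrared range covered by Webb’s instruments.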

Studying Quasars, Their Host Galaxies and Environments, and Their Powerful Outflows

The quasars the team will study are not only among the most distant in the universe, but also among the brightest. These quasars typically have the highest black hole masses, and they also have the highest accretion rates — the rates at which material falls into the black holes.

“We’re interested in observing the most luminous quasars because the very high amount of energy that they’re generating down at their cores should lead to the largest impact on the host galaxy by the mechanisms such as quasar outflow and heating,” said Chris Willott, a research scientist at the Herzberg Astronomy and Astrophysics Research Centre of the National Research Council of Canada (NRC) in Victoria, British Columbia. Willott is also the Canadian Space Agency’s Webb project scientist. “We want to observe these quasars at the moment when they’re having the largest impact on their host galaxies.”

An enormous amount of energy is liberated when matter is accreted by the supermassive black hole. This energy heats and pushes the surrounding gas outward, generating strong outflows that tear across interstellar space like a tsunami, wreaking havoc on the host galaxy.
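
A standard back-of-the-envelope relation (assumed here for illustration; it does not appear in the article) shows why accretion is such a prodigious power source: the radiated luminosity of an accreting black hole is roughly

\[
L \approx \eta\,\dot{M}\,c^{2},
\]

where \(\dot{M}\) is the rate at which mass falls in and the efficiency \(\eta\) is typically of order 0.1, more than ten times the roughly 0.007 efficiency of the nuclear fusion that powers stars.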

Watch as the jets and winds from a supermassive black hole affect its host galaxy—and the space hundreds of thousands of light-years away over millions of years. Credit: NASA, ESA, and L. Hustak (STScI)

Outflows play an important role in galaxy evolution. Gas fuels the formation of stars, so when gas is removed due to outflows, the star-formation rate decreases. In some cases, outflows are so powerful and expel such large amounts of gas that they can completely halt star formation within the host galaxy. Scientists also think that outflows are the main mechanism by which gas, dust and elements are redistributed over large distances within the galaxy or can even be expelled into the space between galaxies – the intergalactic medium. This may provoke fundamental changes in the properties of both the host galaxy and the intergalactic medium.

Examining Properties of Intergalactic Space During the Era of Reionization

More than 13 billion years ago, when the universe was very young, the view was far from clear. Neutral gas between galaxies made the universe opaque to some types of light. Over hundreds of millions of years, the neutral gas in the intergalactic medium became charged or ionized, making it transparent to ultraviolet light. This period is called the Era of Reionization. But what led to the reionization that created the “clear” conditions detected in much of the universe today? Webb will peer deep into space to gather more information about this major transition in the history of the universe. The observations will help us understand the Era of Reionization, which is one of the key frontiers in astrophysics.

The team will use quasars as background light sources to study the gas between us and the quasar. That gas absorbs the quasar’s light at specific wavelengths. Through a technique called imaging spectroscopy, they will look for absorption lines in the intervening gas. The brighter the quasar is, the stronger those absorption line features will be in the spectrum. By determining whether the gas is neutral or ionized, scientists will learn how neutral the universe is and how much of this reionization process has occurred at that particular point in time.
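
To make the measurement idea concrete, here is a minimal, hypothetical sketch in Python (not the team’s actual analysis; the wavelengths, band edges, and noise level are invented for illustration) of how one could estimate the fraction of a background quasar’s light transmitted in different wavelength bands, the basic quantity used to judge whether the intervening gas is ionized (transparent) or still neutral (opaque):

```python
# Conceptual sketch, not the team's actual pipeline: estimate how much of a
# background quasar's light gets through intervening gas in a given band.
# Transmission near 1 means mostly ionized (transparent) gas; near 0 means
# mostly neutral (opaque) gas.
import numpy as np

def mean_transmission(wavelength_nm, flux, continuum, band):
    """Average transmitted fraction (observed flux / unabsorbed continuum)
    inside a wavelength band given as (low, high) in nanometers."""
    lo, hi = band
    mask = (wavelength_nm >= lo) & (wavelength_nm < hi)
    return float(np.mean(flux[mask] / continuum[mask]))

# Toy spectrum: a flat continuum with nearly total absorption blueward of the
# redshifted Lyman-alpha line, roughly what is seen toward very distant quasars.
wavelength = np.linspace(900.0, 1100.0, 2000)        # observed wavelength, nm
continuum = np.ones_like(wavelength)                 # idealized unabsorbed quasar light
rng = np.random.default_rng(0)
flux = np.where(wavelength < 975.0,                  # blueward of redshifted Ly-alpha
                0.02 * rng.random(wavelength.size),  # almost fully absorbed
                continuum)

print(mean_transmission(wavelength, flux, continuum, band=(940.0, 970.0)))    # ~0.01, opaque
print(mean_transmission(wavelength, flux, continuum, band=(1000.0, 1060.0)))  # 1.0, transparent
```

In real data the unabsorbed continuum must itself be modeled from the quasar’s spectrum, and the transmitted fraction is tracked as a function of redshift along the line of sight rather than in a single fixed band.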

The James Webb Space Telescope will use an innovative instrument called an integral field unit (IFU) to capture images and spectra at the same time. This video gives a basic overview of how the IFU works. Credit: NASA, ESA, CSA, and L. Hustak (STScI)

“If you want to study the universe, you need very bright background sources. A quasar is the perfect object in the distant universe, because it’s luminous enough that we can see it very well,” said team member Camilla Pacifici, who is affiliated with the Canadian Space Agency but works as an instrument scientist at the Space Telescope Science Institute in Baltimore. “We want to study the early universe because the universe evolves, and we want to know how it got started.”

The team will analyze the light coming from the quasars with NIRSpec to look for what astronomers call “metals,” which are elements heavier than hydrogen and helium. These elements were formed in the first stars and the first galaxies and expelled by outflows. The gas moves out of the galaxies it was originally in and into the intergalactic medium. The team plans to measure the generation of these first “metals,” as well as the way they’re being pushed out into the intergalactic medium by these early outflows.

The Power of Webb

Webb is an extremely sensitive telescope able to detect very low levels of light. This is important, because even though the quasars are intrinsically very bright, the ones this team is going to observe are among the most distant objects in the universe. In fact, they are so distant that the signals Webb will receive are very, very low. Only with Webb’s exquisite sensitivity can this science be accomplished. Webb also provides excellent angular resolution, making it possible to disentangle the light of the quasar from its host galaxy.

The quasar programs described here are Guaranteed Time Observations involving the spectroscopic capabilities of NIRSpec.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.