Evolving Periodicity: Starlight, Star Bright…As Explained by Math

A newly developed method mathematically describes periodic changes in the brightness of stars. The model can also be applied to similar variable phenomena such as climatology and solar irradiance. Credit: © 2021 Morgan Bennett Smith

The evolving periodicity of the brightness of certain types of stars can now be described mathematically.

Not all stars shine brightly all the time. Some have a brightness that changes rhythmically due to cyclical phenomena like passing planets or the tug of other stars. Others show a slow change in this periodicity over time that can be difficult to discern or capture mathematically. KAUST’s Soumya Das and Marc Genton have now developed a method to bring this evolving periodicity within the framework of mathematically “cyclostationary” processes.

“It can be difficult to explain the variations of the brightness of variable stars unless they follow a regular pattern over time,” says Das. “In this study we created methods that can explain the evolution of the brightness of a variable star, even if it departs from strict periodicity or constant amplitude.”

Classic cyclostationary processes have an easily definable variation over time, like the sweep of a lighthouse beam or the annual variation in solar irradiance at a given location. Here, “stationary” refers to periodicity that remains constant over time, describing highly predictable processes such as a rotating shaft. However, when the period or amplitude changes slowly over many cycles, the mathematics of cyclostationary processes fails.

The team applied their method to model the light emitted from the variable star R Hydrae, which exhibited a slowing of its period from 420 to 380 days between 1900 and 1950. Credit: © 2021 Morgan Bennett Smith

“We call such a process an evolving period and amplitude cyclostationary, or EPACS, process,” says Das. “Since EPACS processes are more flexible than cyclostationary processes, they can be used to model a wide variety of real-life scenarios.”

Das and Genton modeled the nonstationary period and amplitude by defining them as functions that vary over time. In doing this, they expanded the definition of a cyclostationary process to better describe the relationship among variables, such as the brightness and periodic cycle for a variable star. They then used an iterative approach to refine key parameters in order to fit the model to the observed process.
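
To make the idea concrete, here is a minimal Python sketch of that fitting strategy, assuming a signal whose period and amplitude both drift linearly in time. It is an illustration of the approach described above, not the authors' estimator, and every parameter value is invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy EPACS-style signal: both the amplitude and the period are simple
# functions of time rather than constants.
def epacs(params, t):
    a0, a1, p0, p1, phase = params
    amplitude = a0 + a1 * t          # slowly evolving amplitude
    period = p0 + p1 * t             # slowly evolving period
    return amplitude * np.cos(2 * np.pi * t / period + phase)

rng = np.random.default_rng(42)
t = np.linspace(0, 50, 2000)
true = [1.0, 0.02, 4.2, -0.01, 0.3]  # period shrinks over time, as for R Hydrae
y = epacs(true, t) + rng.normal(0, 0.1, t.size)

# Iteratively refine the parameters to fit the observed process.
fit = least_squares(lambda p: epacs(p, t) - y, x0=[1, 0, 4, 0, 0])
print("recovered parameters:", np.round(fit.x, 3))
```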

“We applied our method to model the light emitted from the variable star R Hydrae, which exhibited a slowing of its period from 420 to 380 days between 1900 and 1950,” says Das. “Our approach showed that R Hydrae has an evolving period and amplitude correlation structure that was not captured in previous work.”

Importantly, because this approach links EPACS processes back to classical cyclostationary theory, fitting an EPACS process makes it possible to use existing methods for cyclostationary processes.

“Our method can also be applied to similar phenomena other than variable stars, such as climatology and environmetrics, and particularly for solar irradiance, which could be useful for predicting energy harvesting in Saudi Arabia,” Das says.

Reference: “Cyclostationary Processes With Evolving Periods and Amplitudes” by Soumya Das and Marc G. Genton, 4 February 2021, IEEE Transactions on Signal Processing.
DOI: 10.1109/TSP.2021.3057268

KiNET-X Experiment: NASA Launches Rocket in Search of Aurora Answers

The NASA Black Brant XII rocket lifts off with the KiNET-X experiment at Wallops Flight Facility in Virginia, on Sunday, May 16, 2021. Credit: NASA Wallops/Terry Zaperach

NASA launched one of its largest sounding rockets Sunday from an East Coast facility in an experiment led by a University of Alaska Fairbanks Geophysical Institute space physics professor.

The four-stage Black Brant XII rocket carrying the KiNET-X experiment of principal investigator Peter Delamere lifted off from NASA’s Wallops Flight Facility in Virginia at 8:44 p.m. Eastern time. The ascent of the rocket, which flew on an arc into the ionosphere before beginning its planned descent over the Atlantic Ocean near Bermuda, could be seen along the East Coast.

The experiment seeks to understand how a large mass of plasma such as the solar wind interacts at the particle level with, for example, the plasma of Earth’s space environment.

The interaction between the solar wind and a planet’s magnetosphere appears as the aurora, whether here on Earth or on another planet that has a magnetic field and a substantial atmosphere. Physicists have long been trying to understand how the interaction works.

“KiNET-X was a fantastic success, as the Wallops and science teams worked through unprecedented pandemic-related challenges,” Delamere said. “Hats off to all involved. We couldn’t have asked for a better outcome tonight.”

Glowing clouds created by the release of barium into the ionosphere. Credit: Don Hampton, University of Alaska Fairbanks Geophysical Institute

The rocket released two canisters of barium thermite, which were then detonated — one at about 249 miles high and one 90 seconds later on the downward trajectory at about 186 miles, near Bermuda in the North Atlantic Ocean. The detonations produced purple and green clouds.

The barium, once dispersed from the canisters, turned into a plasma when ionized by sunlight. The barium plasma clouds, which generated their own electromagnetic fields and waves, then interacted with the existing plasma of the ionosphere.

The experiment’s science team has already begun analyzing the data from that interaction.

The launch came on the final day of the 10-day launch window. Previous days had been plagued by bad weather at NASA’s Wallops Flight Facility and in Bermuda, unacceptably high winds at upper elevations, and an incident in which the rocket “came in contact with a launcher support during launch preparations,” according to NASA.

The experiment included three other Geophysical Institute space and plasma scientists: Project co-investigator Don Hampton, a Geophysical Institute research associate professor, was in Bermuda for ground observations; Geophysical Institute researchers Mark Conde, a space physics professor, and Antonius Otto, an emeritus professor of plasma physics, monitored the experiment from Fairbanks.

Two UAF students pursuing Ph.D.s at the Geophysical Institute also participated. Matthew Blandin supported optical operations at Wallops Flight Facility, and Kylee Branning was at NASA’s Langley Research Center, operating cameras on a NASA Gulfstream III monitoring the experiment.

The experiment also included researchers and equipment from Dartmouth College, the University of New Hampshire, Clemson University, the University of Maryland, and NASA’s Goddard Space Flight Center.

Preparation began in 2018, when NASA approved the project.

Researchers Establish the First Entanglement-Based Quantum Network

Artist’s impression of the three-node quantum network. Credit: Matteo Pompili for QuTech

A team of researchers from QuTech in the Netherlands reports realization of the first multi-node quantum network, connecting three quantum processors. In addition, they achieved a proof-of-principle demonstration of key quantum network protocols. Their findings mark an important milestone towards the future quantum internet and have now been published in Science.

The quantum internet

The power of the Internet is that it allows any two computers on Earth to be connected with each other, enabling applications undreamt of at the time of its creation decades ago. Today, researchers in many labs around the world are working towards first versions of a quantum internet — a network that can connect any two quantum devices, such as quantum computers or sensors, over large distances. Whereas today’s Internet distributes information in bits (that can be either 0 or 1), a future quantum internet will make use of quantum bits that can be 0 and 1 at the same time. “A quantum internet will open up a range of novel applications, from unhackable communication and cloud computing with complete user privacy to high-precision time-keeping,” says Matteo Pompili, PhD student and a member of the research team. “And like with the Internet 40 years ago, there are probably many applications we cannot foresee right now.”

Researchers work on one of the quantum network nodes, where mirrors and filters guide the laser beams to the diamond chip. Credit: Marieke de Lorijn for QuTech

Towards ubiquitous connectivity

The first steps towards a quantum internet were taken in the past decade by linking two quantum devices that shared a direct physical link. However, being able to pass on quantum information through intermediate nodes (analogous to routers in the classical internet) is essential for creating a scalable quantum network. In addition, many promising quantum internet applications rely on entangled quantum bits, to be distributed between multiple nodes. Entanglement is a phenomenon observed at the quantum scale, fundamentally connecting particles at small and even at large distances. It gives quantum computers their enormous computational power, and it is the fundamental resource for sharing quantum information over the future quantum internet. By realizing their quantum network in the lab, a team of researchers at QuTech — a collaboration between Delft University of Technology and TNO — is the first to have connected two quantum processors through an intermediate node and to have established shared entanglement between multiple stand-alone quantum processors.

An animation explaining the world’s first rudimentary quantum network. Credit: Slimplot for QuTech

Operating the quantum network

The rudimentary quantum network consists of three quantum nodes, at some distance within the same building. To make these nodes operate as a true network, the researchers had to invent a novel architecture that enables scaling beyond a single link. The middle node (called Bob) has a physical connection to both outer nodes (called Alice and Charlie), allowing entanglement links with each of these nodes to be established. Bob is equipped with an additional quantum bit that can be used as memory, allowing a previously generated quantum link to be stored while a new link is being established. After establishing the quantum links Alice-Bob and Bob-Charlie, a set of quantum operations at Bob converts these links into a quantum link Alice-Charlie. Alternatively, by performing a different set of quantum operations at Bob, entanglement between all three nodes is established.
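
In the standard entanglement-swapping picture, Bob's "set of quantum operations" amounts to a joint Bell-basis measurement on his two qubits. The toy numpy calculation below, which assumes ideal pure states and post-selects one measurement outcome, shows how consuming the Alice-Bob and Bob-Charlie links leaves Alice and Charlie entangled.

```python
import numpy as np

# Qubit ordering: Alice (A), Bob's two qubits (B1, B2), Charlie (C).
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Start from the two heralded links: |Phi+> on A,B1 and |Phi+> on B2,C.
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)  # indices a, j, k, c

# Project Bob's qubits (j, k) onto the |Phi+> Bell outcome. In the lab each
# outcome is heralded and fixed up with local gates instead of post-selected.
ac = np.einsum('jk,ajkc->ac', phi_plus.reshape(2, 2), state).reshape(4)
ac /= np.linalg.norm(ac)

print(np.allclose(ac, phi_plus))  # True: Alice and Charlie now share |Phi+>
```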

Coauthors Matteo Pompili (left) and Sophie Hermans (right), both PhD students in the group of Ronald Hanson, at one of the quantum network nodes. Credit: Marieke de Lorijn for QuTech

Ready for subsequent use

An important feature of the network is that it announces the successful completion of these (intrinsically probabilistic) protocols with a “flag” signal. Such heralding is crucial for scalability, as in a future quantum internet many such protocols will need to be concatenated. “Once established, we were able to preserve the resulting entangled states, protecting them from noise,” says Sophie Hermans, another member of the team. “It means that, in principle, we can use these states for quantum key distribution, a quantum computation or any other subsequent quantum protocol.”

Quantum Internet Demonstrator

This first entanglement-based quantum network provides the researchers with a unique testbed for developing and testing quantum internet hardware, software and protocols. “The future quantum internet will consist of countless quantum devices and intermediate nodes,” says Ronald Hanson, who led the research team. “Colleagues at QuTech are already looking into future compatibility with existing data infrastructures.” In due time, the current proof-of-principle approach will be tested outside the lab on existing telecom fiber — on QuTech’s Quantum Internet Demonstrator, of which the first metropolitan link is scheduled to be completed in 2022.

Higher-level layers

In the lab, the researchers will focus on adding more quantum bits to their three-node network and on adding higher level software and hardware layers. Pompili: “Once all the high-level control and interface layers for running the network have been developed, anybody will be able to write and run a network application without needing to understand how lasers and cryostats work. That is the end goal.”

Reference: “Realization of a multinode quantum network of remote solid-state qubits” by M. Pompili, S. L. N. Hermans, S. Baier, H. K. C. Beukers, P. C. Humphreys, R. N. Schouten, R. F. L. Vermeulen, M. J. Tiggelman, L. dos Santos Martins, B. Dirkse, S. Wehner and R. Hanson, 16 Apr 2021, Science.
DOI: 10.1126/science.abg1919

Global Study Finds Vast Under-Treatment of Diabetes

Only 1 in 10 people with diabetes in low- and middle-income countries is getting evidence-based, low-cost comprehensive care.

Nearly half a billion people on the planet have diabetes, but most of them aren’t getting the kind of care that could make their lives healthier, longer and more productive, according to a new global study of data from people with the condition.

Many don’t even know they have the condition.

Only 1 in 10 people with diabetes in the 55 low- and middle-income countries studied receive the type of comprehensive care that’s been proven to reduce diabetes-related problems, according to the new findings published in Lancet Healthy Longevity.

That comprehensive package of care — low-cost medicines to reduce blood sugar, blood pressure and cholesterol levels; and counseling on diet, exercise and weight — can help lower the health risks of under-treated diabetes. Those risks include future heart attacks, strokes, nerve damage, blindness, amputations and other disabling or fatal conditions.

The new study, led by physicians at the University of Michigan and Brigham and Women’s Hospital with a global team of partners, draws on data from standardized household studies to allow for apples-to-apples comparisons between countries and regions.

The authors analyzed data from surveys, examinations and tests of more than 680,000 people between the ages of 25 and 64 worldwide conducted in recent years. More than 37,000 of them had diabetes; more than half of them hadn’t been formally diagnosed yet, but had a key biomarker of elevated blood sugar.

The researchers have provided their findings to the World Health Organization, which is developing efforts to scale up delivery of evidence-based diabetes care globally as part of an initiative known as the Global Diabetes Compact. The forms of diabetes-related care used in the study are all included in the 2020 WHO Package of Essential Noncommunicable Disease Interventions.

“Diabetes continues to explode everywhere, in every country, and 80% of people with it live in these low- and middle-income countries,” says David Flood, M.D., M.Sc., lead author and a National Clinician Scholar at the U-M Institute for Healthcare Policy and Innovation. “It confers a high risk of complications such as heart attacks, blindness, and strokes. We can prevent these complications with comprehensive diabetes treatment, and we need to make sure people around the world can access treatment.”

Flood worked with senior author Jennifer Manne-Goehler, M.D., Sc.D., of Brigham and Women’s Hospital and the Medical Practice Evaluation Center at Massachusetts General Hospital, to lead the analysis of detailed global data.

Key findings

In addition to the main finding that 90% of the people with diabetes studied weren’t getting access to all six components of effective diabetes care, the study also finds major gaps in specific care.

For instance, while about half of all people with diabetes were taking a drug to lower their blood sugar, and 41% were taking a drug to lower their blood pressure, only 6.3% were receiving cholesterol-lowering medications.

These findings show the need to scale up proven treatment not only to lower glucose but also to address cardiovascular disease risk factors, such as hypertension and high cholesterol, in people with diabetes.

Less than a third had access to counseling on diet and exercise, which can help guide people with diabetes to adopt habits that can control their health risks further.

Even when the authors focused on the people who had already received a formal diagnosis of diabetes, they found that 85% were taking a medicine to lower blood sugar, 57% were taking a blood pressure medication, but only 9% were taking something to control their cholesterol. Nearly 74% had received diet-related counseling, and just under 66% had received exercise and weight counseling.

Taken together, less than one in five people with previously diagnosed diabetes were getting the full package of evidence-based care.

Relationship to national income and personal characteristics

In general, the study finds that people were less likely to get evidence-based diabetes care the lower the average income of the country and region they lived in. That’s based on a model that the authors created using economic and demographic data about the countries that were included in the study.

The nations in the Oceania region of the Pacific had the highest prevalence of diabetes — both diagnosed and undiagnosed — but the lowest rates of diabetes-related care.

But there were exceptions where low-income countries had higher-than-expected rates of good diabetes care, says Flood, citing the example of Costa Rica. And in general, the Latin America and Caribbean region was second only to Oceania in diabetes prevalence, but had much higher levels of care.

Focusing on what countries with outsize achievements in diabetes care are doing well could provide valuable insights for improving care elsewhere, the authors say. That even includes informing care in high-income countries like the United States, which does not consistently deliver evidence-based care to people with diabetes.

The study also shines a light on the variation between countries and regions in the percentage of cases of diabetes that have been diagnosed. Improving reliable access to diabetes diagnostic technologies is important in leading more people to obtain preventive care and counseling.

Women, people with higher levels of education and higher personal wealth, and people who are older or had a high body mass index were more likely to be receiving evidence-based diabetes care. Diabetes in people with “normal” BMI is not uncommon in low- and middle-income countries, suggesting a greater need to focus on these individuals, the authors say.

The fact that diabetes-related medications are available at very low cost, and that individuals can reduce their risk through lifestyle changes, means that cost should not be a major barrier, says Flood. In fact, studies have shown the medications to be cost-effective, meaning that the cost of their early and consistent use is outweighed by the savings on other types of care later.

Reference: 21 May 2021, Lancet Healthy Longevity.
DOI: 10.1016/S2666-7568(21)00089-1

In addition to Flood, who is a clinical lecturer in hospital medicine at Michigan Medicine, U-M’s academic medical center, the study team includes two others from U-M: Michele Heisler, M.D., M.P.A., a professor of internal medicine and member of IHPI, and Matthew Dunn, a student at the U-M School of Public Health. The study was funded by the National Clinician Scholars Program at IHPI, and by the National Institute of Diabetes and Digestive and Kidney Diseases, Harvard Catalyst, and the National Center for Advancing Translational Sciences.

Boosting Energy Production at US Wind Plants With Wake Steering

U.S. wind plants achieve a predicted annual production gain of 0.8% by using wake steering.

Wake steering is a strategy employed at wind power plants in which upstream turbines are deliberately misaligned with the wind direction to deflect their wakes away from downstream turbines, increasing the net power production of the plant.

In the Journal of Renewable and Sustainable Energy, from AIP Publishing, researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) illustrate how wake steering can increase energy production for a large sampling of commercial land-based U.S. wind power plants.

While some plants showed less potential for wake steering due to unfavorable meteorological conditions or turbine layout, several wind power plants were ideal candidates that could benefit greatly from wake steering control.

Overall, a predicted average annual production gain of 0.8% was found for the set of wind plants investigated. In addition, the researchers found that on wind plants using wake steering, wind turbines could be placed more closely together, increasing the amount of power produced in a given area by nearly 70% while maintaining the same cost of energy generation.

Illustration of wake steering for an example wind plant. Wind turbines are misaligned with the wind to redirect their wakes away from downstream turbines. Blue regions indicate lower wind speeds caused by wakes. Credit: Eric Simley, NREL

“We were surprised to see that there was still a large amount of variability in the potential energy improvement from wake steering, even after accounting for the wake losses of different wind plants,” said author David Bensason.

Just as umbrellas may cast a shadow, wind turbines create a region of slower, more turbulent air flow downstream of their rotor, which is known as a wake. When these wakes flow into another turbine, they reduce its power production capacity.

The wake steering strategy “steers” these wakes away from turbines by offsetting the angle between the rotor face and the incoming wind direction. This technique sacrifices the power efficiencies of a few turbines in order to increase the performance of the wind power plant as a whole. Wake steering can only increase energy production if there are wake losses to start. Consequently, the benefits of wake steering tend to increase for wind plants with higher wake losses.
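
A deliberately crude numerical sketch shows the trade-off. The Python below uses a Jensen-style top-hat wake, an assumed cosine-cubed yaw power loss, and a linear wake-deflection guess for a hypothetical two-turbine row; it is not NREL's Gauss-Curl-Hybrid model, and all coefficients are illustrative.

```python
import numpy as np

D = 126.0   # rotor diameter (m), hypothetical turbine
x = 7 * D   # downstream spacing (m)
k = 0.05    # wake expansion coefficient
Ct = 0.8    # thrust coefficient
U = 8.0     # free-stream wind speed (m/s)

def plant_power(yaw_deg):
    yaw = np.radians(yaw_deg)
    p1 = np.cos(yaw) ** 3  # upstream turbine sacrifices power when misaligned
    # Jensen velocity deficit at the downstream rotor, reduced by how far the
    # wake center has been deflected sideways (crude linear deflection model).
    deficit = (1 - np.sqrt(1 - Ct)) / (1 + k * x / (D / 2)) ** 2
    deflection = 0.3 * yaw * x                 # sideways wake offset (m)
    overlap = max(0.0, 1 - deflection / D)     # fraction of rotor still waked
    u2 = U * (1 - deficit * overlap)
    p2 = (u2 / U) ** 3                         # power scales with speed cubed
    return p1 + p2

baseline = plant_power(0)
best = max(range(31), key=plant_power)         # search yaw angles 0-30 degrees
print(f"best yaw: {best} deg, gain: {100 * (plant_power(best) / baseline - 1):.1f}%")
```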

The study is one of the first to use the Gauss-Curl-Hybrid wake model, which NREL developed. This model predicts wake behavior in a wind plant more accurately than prior models and captures effects that are more prominent in large-scale plants. The researchers also combined several publicly available databases and tools that together make the investigation of wake steering potential for a large sample of U.S. wind plants possible.

“We hope that this study, which highlighted the potential for wake steering for a large sample of existing commercial wind plants in the U.S., motivates wind plant owners to implement wake steering in their wind plants to increase energy production and contribute to making wind energy a widely deployed affordable clean energy source,” said co-author Eric Simley.

Reference: “Evaluation of the potential for wake steering for U.S. land-based wind power plants” is authored by D. Bensason, E. Simley, O. Roberts, P. Fleming, M. Debnath, J. King, C. Bay and R. Mudafort, 18 May 2021, Journal of Renewable and Sustainable Energy.
DOI: 10.1063/5.0039325

New and Effective Treatment Discovered for Vitamin D Deficiency

Will aid patients with fat malabsorption issues, including those who have undergone gastric bypass surgery, as well as obese adults.

There are several million people worldwide with various fat malabsorption syndromes, including those who have undergone gastric bypass surgery and those with obesity. These patients often have a difficult time absorbing vitamin D, and both groups are at an increased risk for vitamin D deficiency and therefore at higher risk for osteoporosis and osteomalacia (softening of the bones). Patients with obesity are also susceptible to vitamin D deficiency as vitamin D derived from intestinal absorption and cutaneous synthesis is diluted in a larger body pool of fat. Now a new study demonstrates that 25-hydroxyvitamin D3 is an effective treatment for vitamin D deficiency for these specific patients.

According to the researchers, approximately one-third of adults are obese and require much larger doses of vitamin D to satisfy their requirement. “This vitamin D metabolite is better absorbed in patients with fat malabsorption syndromes and, since it is not as fat-soluble, it does not get diluted in the body fat and is effective in raising and maintaining blood levels of 25-hydroxyvitamin D in obese people,” explained corresponding author Michael F. Holick, PhD, MD, professor of medicine, physiology and biophysics and molecular medicine at Boston University School of Medicine.

Healthy adults, adults with a fat malabsorption syndrome, and obese adults were compared to evaluate whether a more water-soluble form of vitamin D3 (25-hydroxyvitamin D3) was more effective than the same dose of vitamin D3 in improving their vitamin D status. The researchers observed that, compared to healthy adults, only about 36 percent of orally ingested vitamin D3 was found in the blood of patients with fat malabsorption syndromes, including patients who had gastric bypass surgery. When the same adults ingested 25-hydroxyvitamin D3, the patients with fat malabsorption syndromes were able to absorb it as well as the healthy adults, thereby raising their vitamin D status to the same degree. A similar observation was made in the obese subjects compared to the healthy controls. “Therefore, using 25-hydroxyvitamin D3 could be a novel approach for treating vitamin D deficiency in patients with fat malabsorption syndromes and obese adults,” added Holick.

Vitamin D deficiency not only results in bone loss, increasing the risk of fracture, but also causes the painful bone disease osteomalacia. Patients who are vitamin D deficient with osteomalacia have unrelenting achiness in their bones and muscles. Vitamin D deficiency has been associated with an increased risk of many chronic illnesses, including multiple sclerosis, type 1 diabetes, heart disease, type 2 diabetes, depression, neurocognitive dysfunction, and Alzheimer’s disease, as well as infectious diseases including COVID-19.

Reference: “A pilot-randomized, double-blind crossover trial to evaluate the pharmacokinetics of orally administered 25-hydroxyvitamin D3 and vitamin D3 in healthy adults with differing BMI and in adults with intestinal malabsorption” by Nipith Charoenngam, Tyler A Kalajian, Arash Shirvani, Grace H Yoon, Suveer Desai, Ashley McCarthy, Caroline M Apovian and Michael F Holick, 19 May 2021, American Journal of Clinical Nutrition.
DOI: 10.1093/ajcn/nqab123

This trial was funded in part by a grant from Carbogen Amcis BV and institutional resources. NC received an institutional research training grant from the Ruth L. Kirschstein National Research Service Award program from the National Institutes of Health (2 T32 DK 7201-42). CMA is supported by P30 DK046200. Carbogen Amcis BV, Netherlands, provided assistance and the supply of vitamin D3 and 25-hydroxyvitamin D3.

Invasive Alien Species Cost Africa’s Agricultural Sector a Staggering $3.6 Trillion a Year

Fall armyworm costs USD $9.4 billion a year in yield losses to African agriculture. Credit: CABI

CABI scientists have conducted the first comprehensive study on the economic impact of a range of Invasive Alien Species (IAS) on Africa’s agricultural sector, which they estimated to be USD $3.6 trillion a year.

This is equivalent to 1.5 times the Gross Domestic Product (GDP) of all African countries combined — or similar to that of Germany.

The average annual cost of IAS per country was USD $76.32 billion. Full details of the cost for individual countries are outlined in the paper published in the journal CABI Agriculture and Bioscience.

The team, comprising scientists from CABI centers in Africa and Europe, conducted a thorough literature review and online survey of 110 respondents — largely working in government or research — and established that Tuta (Phthorimaea) absoluta caused the highest annual yield losses at USD $11.45 billion, followed by the fall armyworm (Spodoptera frugiperda) at USD $9.4 billion.

The research took account of yield losses of major crops including maize, tomato, cassava, mango, and banana (USD $82.2 billion), as well as labor costs — through weeding (USD $3.63 trillion) — and loss of income derived from livestock (USD $173 million).
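
A quick back-of-envelope pass over those quoted components shows how heavily weeding labor dominates the headline figure; the short Python below simply restates the article's numbers.

```python
# Cost components quoted above, in USD billions.
components = {
    "crop yield losses": 82.2,
    "weeding labor": 3630.0,       # USD $3.63 trillion
    "lost livestock income": 0.173,
}
total = sum(components.values())
for name, cost in components.items():
    print(f"{name}: {cost / total:.1%} of the combined estimate")
# Weeding labor is roughly 98% of the combined total, which lands near the
# USD $3.6 trillion headline (differences reflect rounding in the quoted
# figures) and explains the opportunity-cost discussion below.
```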

The annual impact of IAS — that also included Prostephanus truncatus, Bactrocera dorsalis, and Banana bunchy top virus (BBTV) — was highest on cassava (USD $21.8 billion), followed by citrus fruits (USD $14.6 billion), tomato (USD $10.1 billion), maize (USD $9.8 billion) and banana (USD $7.1 billion).

Lead author Dr. Rene Eschen said, “This study reveals the extent and scale of the economic impacts of Invasive Alien Species in the agricultural sector in one of the least studied continents.

“The results highlight the need for measures that prevent new species from arriving and established species from spreading, and that reduce management costs for widely present and impactful species through methods such as biocontrol. This will potentially reduce future production costs, lower yield losses and improve the livelihoods of farmers and other affected land users.”

Co-author, Dr. Bryony Taylor, said, “We have added to the knowledge base of the costs of Invasive Alien Species to Africa’s agricultural sector by including all countries within the continent where previous research only included a few.

“We also include the cost of reduced livestock-derived income and research and labor costs, which are generally not included in estimates of the costs of Invasive Alien Species.

“The results of this study provide policymakers with the evidence needed to enable prioritization of management measures for IAS, thereby reducing costs in the long term.”

Fernadis Makale, another co-author on the paper, said, “The large estimate for the weeding costs may come as a surprise but this work, often carried out by women and children, is never measured as part of the African economy.

“Moreover, it should not be concluded that people are being paid that amount as salaries. Rather, the estimate represents an opportunity cost, meaning that if people didn’t need to weed IAS, they could do something else, such as going to school or undertaking an income-generating economic activity.

“In addition, our study provides evidence of the need for country and regional quarantine and phytosanitary measures to prevent the entry and spread of new IAS, preventing additional, potentially huge costs as new IAS spread across the continent.”

The study comes after a policy summit on Invasive Species held in 2019, where 70 delegates, representing policymakers, research, the private sector and civil society from across Africa, resolved to develop a strategy and action plan to fight against Invasive Alien Species.

In response to the findings, Dr. Dennis Rangi, Director General, Development, CABI, said, “An estimated USD $3.6 trillion a year impact of Invasive Alien Species on Africa’s agricultural sector is a tremendous loss where over 80% of people living in rural areas rely on the crops they grow for food and income.

“The long-term effects are exacerbated by COVID-19 which continues to apply intense pressure on an already fragile agricultural sector and food supply chain. Notably, Governments across the continent put in place mitigation measures to manage the pandemic and its impact. Kenya for example proposed a USD $503 million economic stimulus package in 2020 to cushion its citizens.

“This is the same resolve, urgency, and investment our Governments need to channel towards managing the Invasive Alien Species problem.

“Under the African Union’s stewardship, countries now have a Strategy for Managing Invasive Species in Africa. The 2021-2030 strategy provides a framework that all relevant stakeholders at the Continental, Regional and National level can use to sustainably prevent and eradicate invasive species in Africa.”

H.E Ambassador Madam Josefa Sacko, Commissioner for Agriculture, Rural Development, Blue Economy and Sustainable Environment of the African Union Commission, said, “From this research, it is clear that Invasive Alien Species pose a devastating impact on Africa’s agricultural sector with a direct consequence on the achievement of the four commitments listed in the Malabo declaration. We cannot transform African agriculture if we do not pay special attention to the management and control of Invasive Alien Species. It’s time to act and walk the talk.

“The AU Commission (AUC) is providing a coordination mechanism for the implementation of the strategy for managing Invasive Alien Species at the continental level. This also includes providing strategic guidance, facilitating domestication and implementation of the strategy, plus seeking support from partners across the continent.

“Managing Invasive Alien Species is an absolute imperative if Africa’s agriculture is to meet its full potential and feed its growing population — which is expected to double to 2.5 bn people by 2050 — and contribute towards global food security,” added Madam Sacko.

Currently in its first year of implementation, the Invasive Alien Species strategy outlines six key areas of focus under its 2021-2025 action plan. One of the priority areas is the establishment in 2022 of continental, regional, and national emergency funding mechanisms for rapid action against Invasive Alien Species.

Reference: “Towards estimating the economic cost of invasive alien species to African crop and livestock production” by René Eschen, Tim Beale, J. Miguel Bonnin, Kate L. Constantine, Solomon Duah, Elizabeth A. Finch, Fernadis Makale, Winnie Nunda, Adewale Ogunmodede, Corin F. Pratt, Emma Thompson, Frances Williams, Arne Witt and Bryony Taylor, 20 May 2021, CABI Agriculture and Bioscience.
DOI: 10.1186/s43170-021-00038-7

The research was financially supported by the Foreign, Commonwealth & Development Office (FCDO), UK, and the Directorate General for International Cooperation (DGIS), Netherlands, through CABI’s Action on Invasives program. CABI is an international intergovernmental organization, and we gratefully acknowledge the core financial support from our member countries (and lead agencies) including the United Kingdom (Foreign, Commonwealth & Development Office), China (Chinese Ministry of Agriculture and Rural Affairs), Australia (Australian Centre for International Agricultural Research), Canada (Agriculture and Agri-Food Canada), Netherlands (Directorate-General for International Cooperation), and Switzerland (Swiss Agency for Development and Cooperation).

Almost 1 Million Extra Deaths Related to COVID-19 Pandemic in 29 High Income Countries in 2020

Including 94,400 more deaths than expected in the UK alone.

Almost 1 million extra deaths relating to the covid-19 pandemic occurred in 29 high income countries in 2020, finds a study published by The BMJ today.

Except for Norway, Denmark and New Zealand, all other countries examined had more deaths than expected in 2020, particularly in men. The five countries with the highest absolute number of excess deaths were the US, UK, Italy, Spain, and Poland.

Measuring excess deaths — the number of deaths above that expected during a given time period — is a way of assessing the impact of the pandemic on deaths in different populations. However, previous studies have not accounted for temporal and seasonal trends and differences in age and sex across countries.

To address this, a team of international researchers, led by Dr. Nazrul Islam from the Nuffield Department of Population Health, University of Oxford, set out to estimate the direct and indirect effects of the covid-19 pandemic on mortality in 2020 in 29 high-income countries.

Using a mathematical model, they calculated weekly excess deaths in 2020 for each country, accounting for age and sex differences between countries, and also for seasonal and yearly trends in mortality over the five preceding years.
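
A simplified version of that calculation can be sketched in Python. The model below is illustrative rather than the authors' exact specification: it fits a Poisson regression with a linear trend and annual harmonics to five years of synthetic weekly deaths, then scores 2020 against the prediction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = np.arange(6 * 52)                    # 2015-2020, 52 weeks per year
season = 2 * np.pi * (weeks % 52) / 52
baseline = 1000 + 0.5 * weeks + 120 * np.cos(season)  # synthetic trend + season
deaths = rng.poisson(baseline)
deaths[5 * 52:] += rng.poisson(150, 52)      # add a pandemic-like shock to 2020

# Expected deaths learned from the five pre-pandemic years.
X = sm.add_constant(np.column_stack([weeks, np.sin(season), np.cos(season)]))
train = slice(0, 5 * 52)
model = sm.GLM(deaths[train], X[train], family=sm.families.Poisson()).fit()

expected_2020 = model.predict(X[5 * 52:])
excess = deaths[5 * 52:] - expected_2020
print(f"estimated excess deaths in 2020: {excess.sum():.0f}")
```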

Overall an estimated 979,000 total excess deaths occurred in 2020 in the 29 countries analyzed. All countries experienced excess deaths in 2020, except New Zealand, Norway, and Denmark.

The five countries with the highest absolute number of excess deaths were the US (458,000), the UK (94,400), Italy (89,100), Spain (84,100), and Poland (60,100). New Zealand had lower overall deaths than expected (−2,500).

The total number of excess deaths was largely concentrated among people aged 75 or older, followed by people aged 65-74, while deaths in children under 15 were similar to expected levels in most countries and lower than expected in some countries.

In most countries, the estimated number of excess deaths exceeded the number of reported deaths from covid-19. For example, in both the US and the UK, estimated excess deaths were more than 30% higher than the number of reported covid-19 deaths.

However, other countries such as Israel and France had a higher number of reported covid-19 deaths than estimated excess deaths. The cause of this variation is unclear, but may result from access to testing and differences in how countries define and record covid-19 deaths.

In most countries, age-specific excess death rates were higher in men than in women, and the absolute difference in rates between the sexes tended to increase with age. However, in the US the excess death rate was higher among women than men in those aged 85 years or older.

The researchers point to some study limitations, including a lack of data from lower and middle income countries and on factors such as ethnicity and socioeconomic status, and they acknowledge that many indirect effects of a pandemic may need a longer timeframe to have a measurable effect on mortality.

Nevertheless, this was a large study using detailed age and sex-specific mortality data with robust analytical methods, and as such “adds important insights on the direct and indirect effects of the covid-19 pandemic on total mortality,” they say.

“Reliable and timely monitoring of excess deaths would help to inform public health policy in investigating the sources of excess mortality in populations and would help to detect important social inequalities in the impact of the pandemic to inform more targeted interventions,” they add.

Future work will also be needed to understand the impact of national vaccination programs on mortality in 2021, they conclude.

These findings confirm the huge toll of the covid-19 pandemic on mortality in high-income countries in 2020, say researchers at Imperial College London in a linked editorial.

But they warn that its full impact may not be apparent for many years, particularly in lower income countries where factors such as poverty, lack of vaccines, weak health systems, and high population density place people at increased risk from covid-19 and related harm.

And they point out that while mortality is a useful metric, policy informed by deaths alone overlooks what may become a huge burden of long-term morbidity resulting from covid-19.

“There is an urgent need to measure this excess morbidity, support those with long-term complications of covid-19, and fund health systems globally to address the backlog of work resulting from the pandemic,” they conclude.

References:

“Excess deaths associated with covid-19 pandemic in 2020: age and sex disaggregated time series analysis in 29 high income countries” 20 May 2021, The BMJ.
DOI: 10.1136/bmj.n1137

“Measuring the impact of covid-19” 20 May 2021, The BMJ.
DOI: 10.1136/bmj.n1239

National Border Walls and Fences Threaten Wildlife As Climate Changes

The USA-Mexico border wall could restrict the movement of jaguars as they shift between countries to find more hospitable places to live due to climate change. Credit: Stephen Willis/Durham University

Walls and fences designed to secure national borders could make it difficult for almost 700 mammal species to adapt to climate change, according to new research.

The study led by Durham University, UK, is the first to look at how man-made barriers could restrict the movement of animals as they shift between countries to find more hospitable places to live.

The researchers identified 32,000 km of borders that are fortified with fences and walls, which have the potential to stop large numbers of animals from moving to more suitable environments.

Of these barriers, the USA-Mexico border wall, fences along the border between China and Russia, and fencing being constructed along the India-Myanmar border might be the most ecologically damaging, they said.

The USA-Mexico border wall alone could obstruct the movement of 122 mammal species displaced by climate change, the authors have calculated.

Border barriers, such as this along the USA-Mexico border, present an obstacle for many species whose ranges are shifting under climate change. Credit: Steve Hillebrand/USFWS

Mammals that could be obstructed by man-made borders across the world include leopards, tigers, the critically endangered Saiga antelope, cheetah, and jaguarundi (See Additional Information for more potentially affected species).

As well as considering political borders, the researchers also compared the likely impacts of ongoing climate change on species within countries.

They found that biodiversity loss is likely to be most severe in countries that are less responsible for the emissions that are driving climate change.

The findings are published in the journal Proceedings of the National Academy of Sciences, USA.

The researchers say that a third of mammals and birds will need to find suitable habitats in other countries by 2070 due to climate change, with this movement most likely to happen between the Amazon rainforest and tropical Andes, around the Himalayas, and in parts of Central and Eastern Africa.

They are calling for more cross-border conservation initiatives and habitat corridors to reduce the problem.

They have also urged world leaders to reduce the risk to biodiversity by committing to ambitious reductions in greenhouse gases when they meet at the UN Climate Change Conference in Glasgow this November.

Leopards are one of nearly 700 species that may be unable to move into new countries because of border barriers as the climate changes. Credit: Stephen Willis/Durham University

Joint-study lead Professor Stephen Willis, in Durham University’s Department of Biosciences, said: “Species all over the planet are on the move as they respond to a changing climate. Our findings show how important it is that species can move across national boundaries through connected habitats in order to cope with this change.

“Borders that are fortified with walls and fences pose a serious threat to any species that can’t get across them.

“If we’re serious about protecting nature, expanding transboundary conservation initiatives and reducing the impacts of border barriers on species will be really important – although there’s no substitute for tackling the greenhouse gas emissions at the root of the issue.”

In total the researchers looked at the effect of climate change on the movement of 12,700 mammal and bird species whose habitats could be affected by rising global temperatures, forcing them to find new homes.

They found that the loss of bird and mammal species was projected to be greater in poorer countries with lower CO2 emissions which would be impacted more significantly by global climate change.

Joint-study lead Mark Titley, a PhD researcher in Durham University’s Department of Biosciences, said: “The stark inequities between those who contributed most to climate change and those who will be most impacted raise really important questions of international justice.

“Fortunately, our models also show how strong and urgent emissions reductions, in line with the Paris Agreement, could greatly reduce the impacts on biodiversity and relieve the burden of such losses on less wealthy nations.

“World leaders must seize the opportunity at November’s COP 26 climate conference in Glasgow to ramp up ambitious pledges to cut emissions, or risk enormous harm to the natural world and our societies that depend on it.”

Reference: “Global inequities and political borders challenge nature conservation under climate change” by Mark A. Titley, Stuart H. M. Butchart, Victoria R. Jones, Mark J. Whittingham and Stephen G. Willis, 8 February 2021, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2011204118

The research was funded by the Natural Environment Research Council, part of UK Research and Innovation, and carried out in collaboration with BirdLife International.

Additional information

Species that could be affected by the USA-Mexico border wall:

  • Mexican wolf
  • Jaguar
  • Jaguarundi
  • White-lipped peccary
  • Margay
  • Common opossum
  • Greater grison
  • Southern spotted skunk

Species that could be affected by the India-Myanmar border:

  • Sloth bear
  • Indian pangolin
  • Banteng
  • Large spotted civet
  • Himalayan goral
  • Gongshan muntjac
  • Indian grey mongoose
  • Burmese hare

Species that could be affected by the China-Russia border:

  • Tibetan antelope
  • Chinese goral
  • Goa
  • Goitered gazelle
  • Tibetan fox
  • Desert hare
  • Korean hare
  • Hog badger

New Technology Makes Cancer Tumors Eliminate Themselves

A piece of the tumor was made completely transparent and scanned in 3D with a special microscope. The components labeled with fluorescent colors were rendered in a rotatable 3D representation on the computer (red: blood vessels, turquoise: tumor cells, yellow: therapeutic antibody). Credit: Plückthun Lab

A new technology developed by University of Zurich researchers enables the body to produce therapeutic agents on demand at the exact location where they are needed. The innovation could reduce the side effects of cancer therapy and may hold the solution to better delivery of Covid-related therapies directly to the lungs.

Scientists at the University of Zurich have modified a common respiratory virus, called adenovirus, to act like a Trojan horse to deliver genes for cancer therapeutics directly into tumor cells. Unlike chemotherapy or radiotherapy, this approach does no harm to normal healthy cells. Once inside tumor cells, the delivered genes serve as a blueprint for therapeutic antibodies, cytokines and other signaling substances, which are produced by the cancer cells themselves and act to eliminate tumors from the inside out.

Sneaking adenoviruses past the immune system undetected

“We trick the tumor into eliminating itself through the production of anti-cancer agents by its own cells,” says postdoctoral fellow Sheena Smith, who led the development of the delivery approach. Research group leader Andreas Plueckthun explains: “The therapeutic agents, such as therapeutic antibodies or signaling substances, mostly stay at the place in the body where they’re needed instead of spreading throughout the bloodstream where they can damage healthy organs and tissues.”

The UZH researchers call their technology SHREAD, for SHielded, REtargeted ADenovirus. It builds on key technologies previously engineered by the Plueckthun team, including methods to direct adenoviruses to specified parts of the body and to hide them from the immune system.

High amount of drugs in the tumor, low concentration in other tissues

With the SHREAD system, the scientists made the tumor itself produce a clinically approved breast cancer antibody, called trastuzumab, in the mammary gland of a mouse. They found that, after a few days, SHREAD produced more of the antibody in the tumor than when the drug was injected directly. Moreover, the concentrations in the bloodstream and in other tissues, where side effects could occur, were significantly lower with SHREAD. The scientists used a very sophisticated, high-resolution 3D imaging method and tissues rendered totally transparent to show how the therapeutic antibody, produced in the body, creates pores in blood vessels of the tumor and destroys tumor cells, and thus treats it from the inside.

Use to combat Covid-19 being investigated

Plueckthun, Smith and colleagues emphasize that SHREAD is applicable not only for the fight against breast cancer. As healthy tissues no longer come into contact with significant levels of the therapeutic agent, it is also applicable for delivery of a wide range of so-called biologics – powerful protein-based drugs that would otherwise be too toxic.

In fact, members of the Plueckthun group are currently applying their technology in a project aimed as a therapy for Covid-19. Adenoviral vectors are already being used in several of the COVID vaccines, including the Johnson & Johnson, AstraZeneca, China’s CanSino Biologics and Russia’s Sputnik V vaccines – but without the innovative SHREAD technology. “By delivering the SHREAD treatment to patients via an inhaled aerosol, our approach could allow targeted production of Covid antibody therapies in lung cells, where they are needed most,” Smith explains. “This would reduce costs, increase accessibility of Covid therapies and also improve vaccine delivery with the inhalation approach.”

Reference: “The SHREAD gene therapy platform for paracrine delivery improves tumor localization and intratumoral effects of a clinical antibody” by Sheena N. Smith, Rajib Schubert, Branko Simic, Dominik Brücher, Markus Schmid, Niels Kirk, Patrick C. Freitag, Viviana Gradinaru and Andreas Plückthun, 17 May 2021, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2017925118

Funding: Swiss National Science Foundation, NIH/National Cancer Institute

AI Uses Timing and Weather Data to Accurately Predict Cardiac Arrest Risk

Machine learning model combines timing and weather data.

A branch of artificial intelligence (AI), called machine learning, can accurately predict the risk of an out of hospital cardiac arrest — when the heart suddenly stops beating — using a combination of timing and weather data, finds research published online in the journal Heart.

Machine learning is the study of computer algorithms and is based on the idea that systems can learn from data and identify patterns to inform decisions with minimal intervention.

The risk of a cardiac arrest was highest on Sundays, Mondays, public holidays and when temperatures dropped sharply within or between days, the findings show.

This information could be used as an early warning system for citizens, to lower their risk and improve their chances of survival, and to improve the preparedness of emergency medical services, suggest the researchers.

Out of hospital cardiac arrest is common around the world, but is generally associated with low rates of survival. Risk is affected by prevailing weather conditions.

But meteorological data are extensive and complex, and machine learning has the potential to pick up associations not identified by conventional one-dimensional statistical approaches, say the Japanese researchers.

To explore this further, they assessed the capacity of machine learning to predict daily out-of-hospital cardiac arrest, using daily weather (temperature, relative humidity, rainfall, snowfall, cloud cover, wind speed, and atmospheric pressure readings) and timing (year, season, day of the week, hour of the day, and public holidays) data.

Of 1,299,784 cases occurring between 2005 and 2013, machine learning was applied to 525,374, using either weather or timing data, or both (training dataset).

The results were then compared with 135,678 cases occurring in 2014-15 to test the accuracy of the model for predicting the number of daily cardiac arrests in other years (testing dataset).
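
The overall workflow can be sketched on synthetic data. The Python below is schematic, not the authors' model: it builds the kinds of weather and calendar features the study describes, trains a generic gradient-boosting regressor on the earlier years, and tests on the later ones.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 9 * 365                                  # nine years of daily records
day = np.arange(n)
temp = 15 - 10 * np.cos(2 * np.pi * day / 365) + rng.normal(0, 3, n)
temp_drop = np.maximum(0, -np.diff(temp, prepend=temp[0]))  # day-to-day drop
weekday = day % 7
holiday = rng.random(n) < 0.03

# Synthetic risk: more events when cold, after sharp temperature drops, and
# on Sundays/Mondays/holidays, mirroring the reported associations.
rate = 40 - 0.5 * temp + 1.5 * temp_drop + 3 * np.isin(weekday, [0, 1]) + 4 * holiday
events = rng.poisson(np.maximum(rate, 1))

X = np.column_stack([temp, temp_drop, weekday, holiday])
train, test = slice(0, 7 * 365), slice(7 * 365, n)   # first 7 years vs last 2
model = GradientBoostingRegressor().fit(X[train], events[train])
print("MAE on held-out years:", mean_absolute_error(events[test], model.predict(X[test])))
```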

And to see how accurate the approach might be at the local level, the researchers carried out a ‘heatmap analysis,’ using another dataset drawn from the location of out-of-hospital cardiac arrests in Kobe city between January 2016 and December 2018.

The combination of weather and timing data most accurately predicted an out-of-hospital cardiac arrest in both the training and testing datasets.

It identified Sundays, Mondays, public holidays, winter, low temperatures, and sharp temperature drops within and between days as more strongly associated with cardiac arrest than either the weather or timing data alone could show.

The researchers acknowledge that they didn’t have detailed information on the location of cardiac arrests except in Kobe city, nor did they have any data on pre-existing medical conditions, both of which may have influenced the results.

But they suggest: “Our predictive model for daily incidence of [out of hospital cardiac arrest] is widely generalizable for the general population in developed countries, because this study had a large sample size and used comprehensive meteorological data.”

They add: “The methods developed in this study serve as an example of a new model for predictive analytics that could be applied to other clinical outcomes of interest related to life-threatening acute cardiovascular disease.”

And they conclude: “This predictive model may be useful for preventing [out of hospital cardiac arrest] and improving the prognosis of patients…via a warning system for citizens and [emergency medical services] on high-risk days in the future.”

In a linked editorial, Dr. David Foster Gaieski, of Sidney Kimmel Medical College at Thomas Jefferson University, agrees.

“Knowing what the weather will most likely be in the coming week can generate ‘cardiovascular emergency warnings’ for people at risk — notifying the elderly and others about upcoming periods of increased danger similar to how weather data are used to notify people of upcoming hazardous road conditions during winter storms,” he explains.

“These predictions can be used for resource deployment, scheduling, and planning so that emergency medical services systems, emergency department resuscitation resources, and cardiac catheterization laboratory staff are aware of, and prepared for, the number of expected [cases] during the coming days,” he adds.

References:

“Machine learning model for predicting out-of-hospital cardiac arrests using meteorological and chronological data” 17 May 2021, Heart.
DOI: 10.1136/heartjnl-2020-318726

“Next week’s weather forecast: cloudy, cold, with a chance of cardiac arrest” 17 May 2021, Heart.
DOI: 10.1136/heartjnl-2021-318950

Funding: Environmental Restoration and Conservation Agency of Japan; Japan Society for the Promotion of Science; Intramural Research Fund of Cardiovascular Disease of the National Cerebral and Cardiovascular Centre

Harvesting Light Like Nature Does: Synthesizing a New Class of Bio-Inspired, Light-Capturing Nanomaterials

POSS-peptoid molecules self-assemble into rhomboid-shaped nanocrystals. Credit: Illustration by Stephanie King | Pacific Northwest National Laboratory

Inspired by nature, researchers at Pacific Northwest National Laboratory (PNNL), along with collaborators from Washington State University, created a novel material capable of capturing light energy. This material provides a highly efficient artificial light-harvesting system with potential applications in photovoltaics and bioimaging.

The research provides a foundation for overcoming the difficult challenges involved in the creation of hierarchical functional organic-inorganic hybrid materials. Nature provides beautiful examples of hierarchically structured hybrid materials such as bones and teeth. These materials typically showcase a precise atomic arrangement that allows them to achieve many exceptional properties, such as increased strength and toughness.

PNNL materials scientist Chun-Long Chen, corresponding author of this study, and his collaborators created a new material that reflects the structural and functional complexity of natural hybrid materials. This material combines the programmability of a protein-like synthetic molecule with the complexity of a silicate-based nanocluster to create a new class of highly robust nanocrystals. They then programmed this 2D hybrid material to create a highly efficient artificial light-harvesting system.

“The sun is the most important energy source we have,” said Chen. “We wanted to see if we could program our hybrid nanocrystals to harvest light energy—much like natural plants and photosynthetic bacteria can—while achieving a high robustness and processibility seen in synthetic systems.” The results of this study were published May 14, 2021, in Science Advances.

Chun-Long Chen

Materials scientist Chun-Long Chen finds inspiration for new materials in natural structures. Credit: Photo by Andrea Starr | Pacific Northwest National Laboratory

Big dreams, tiny crystals

Though these types of hierarchically structured materials are exceptionally difficult to create, Chen’s multidisciplinary team of scientists combined their expert knowledge to synthesize a sequence-defined molecule capable of forming such an arrangement. The researchers created an altered protein-like structure, called a peptoid, and attached a precise silicate-based cage-like structure (abbreviated POSS) to one end of it. They then found that, under the right conditions, they could induce these molecules to self-assemble into perfectly shaped crystals of 2D nanosheets. This created another layer of cell-membrane-like complexity similar to that seen in natural hierarchical structures while retaining the high stability and enhanced mechanical properties of the individual molecules.

“As a materials scientist, nature provides me with a lot of inspiration,” said Chen. “Whenever I want to design a molecule to do something specific, such as act as a drug delivery vehicle, I can almost always find a natural example to model my designs after.”

POSS Peptoid

POSS-peptoid nanocrystals form a highly efficient light-harvesting system that absorbs excitation light and emits a fluorescent signal. This system can be used for live cell imaging. Credit: Illustration by Chun-Long Chen and Yang Song | Pacific Northwest National Laboratory

Designing bio-inspired materials

Once the team successfully created these POSS-peptoid nanocrystals and demonstrated their unique properties, including high programmability, they set out to exploit those properties. They programmed the material to include special functional groups at specific locations and intermolecular distances. Because these nanocrystals combine the strength and stability of POSS with the variability of the peptoid building block, the programming possibilities are nearly endless.

Once again looking to nature for inspiration, the scientists created a system that could capture light energy much in the way pigments found in plants do. They added pairs of special “donor” molecules and cage-like structures that could bind an “acceptor” molecule at precise locations within the nanocrystal. The donor molecules absorb light at a specific wavelength and transfer the light energy to the acceptor molecules. The acceptor molecules then emit light at a different wavelength. This newly created system displayed an energy transfer efficiency of over 96%, making it one of the most efficient aqueous light-harvesting systems of its kind reported thus far.
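The article does not name the transfer mechanism, but donor-to-acceptor light harvesting of this kind is commonly modeled as Förster resonance energy transfer (FRET), whose efficiency falls off steeply with donor-acceptor distance; this is one reason placing the pairs at precise intermolecular distances matters. A standard form of the relation:

```latex
% FRET efficiency E as a function of donor-acceptor separation r, where
% R_0 (the Forster radius, typically a few nanometers) is the separation
% at which E = 50%.
\[
E = \frac{1}{1 + \left( r / R_0 \right)^{6}}
\]
```

Holding donors and acceptors well inside R0 drives the efficiency toward unity, consistent with the reported value of over 96%.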

Demonstrating the uses of POSS-peptoids for light harvesting

To showcase the use of this system, the researchers then inserted the nanocrystals into live human cells as a biocompatible probe for live cell imaging. When light of a certain color shines on the cells and the acceptor molecules are present, the cells emit a light of a different color. When the acceptor molecules are absent, the color change is not observed. Though the team has only demonstrated the system’s usefulness for live cell imaging so far, the enhanced properties and high programmability of this 2D hybrid material lead them to believe this is just one of many possible applications.

“Though this research is still in its early stages, the unique structural features and high energy transfer of POSS-peptoid 2D nanocrystals have the potential to be applied to many different systems, from photovoltaics to photocatalysis,” said Chen. He and his colleagues will continue to explore avenues for application of this new hybrid material.

Reference: “Programmable two-dimensional nanocrystals assembled from POSS-containing peptoids as efficient artificial light-harvesting systems” by Mingming Wang, Yang Song, Shuai Zhang, Xin Zhang, Xiaoli Cai, Yuehe Lin, James J. De Yoreo and Chun-Long Chen, 14 May 2021, Science Advances.
DOI: 10.1126/sciadv.abg1448

Other authors of this study include James De Yoreo, Mingming Wang, Shuai Zhang, and Xin Zhang from PNNL and Yang Song and Yuehe Lin from Washington State University. Shuai Zhang, James De Yoreo, and Chun-Long Chen are also affiliated with the University of Washington. This work was supported by the U.S. Department of Energy Basic Energy Sciences program as part of the Center for the Science of Synthesis Across Scales, an Energy Frontier Research Center located at the University of Washington.

Temperature-Dependent Sex Reversals in Bearded Dragon Embryos
Temperature-Dependent Sex Reversals in Bearded Dragon Embryos
Bearded Dragon (Pogona vitticeps)

Native to the arid landscapes of Australia, the central bearded dragon (Pogona vitticeps) is a fascinating species. It has genetic sex determination, but when incubated at high temperatures, genetic males sex reverse and develop as females. Credit: Whiteley SL et al., 2021, PLOS Genetics

Bearded dragon embryos become females either through sex chromosomes or hot temperatures.

Ancient cellular processes are likely involved in temperature-dependent sex reversals.

Bearded dragon embryos can use two different sets of genes to become a female lizard — one activated by the sex chromosomes and the other activated by high temperatures during development. Sarah Whiteley and Arthur Georges of the University of Canberra published these new findings on April 15th, 2021, in the journal PLOS Genetics.

In many reptiles and fish, the sex of a developing embryo depends on the temperature of the surrounding environment. This phenomenon, called temperature-dependent sex determination, was discovered in the 1960s, but the molecular details of how it happens have eluded scientists despite half a century of intensive research. In the new study, the researchers investigated the biochemical pathways required to make a female by studying this phenomenon in bearded dragons. Male bearded dragons have ZZ sex chromosomes, while females have ZW sex chromosomes. However, hot temperatures can override ZZ sex chromosomes, causing a male lizard to develop as a female.

Whiteley and Georges compared which genes were turned on during development in bearded dragons with ZW chromosomes versus ZZ animals exposed to high temperatures. They discovered that initially, different sets of developmental genes are active in the two types of females, but that ultimately the pathways converge to produce ovaries. The findings support recent research proposing that ancient signaling processes inside the cell help translate high temperatures into a sex reversal.
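In outline, this gene-by-gene comparison is a differential expression analysis. The sketch below is a toy version with fabricated counts and hypothetical gene names, not the authors’ pipeline; real RNA-seq analyses use dedicated tools and multiple-testing corrections across thousands of genes.

```python
# Toy differential-expression sketch: for each gene, compare expression
# between ZW females and temperature-reversed ZZ females with a t-test.
# All values are fabricated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
genes = ["gene_A", "gene_B", "gene_C"]            # hypothetical gene labels
zw_females = {g: rng.normal(100, 10, size=6) for g in genes}
zz_reversed = {g: rng.normal(100, 10, size=6) for g in genes}
zz_reversed["gene_B"] += 40                        # pretend gene_B is upregulated

for g in genes:
    t_stat, p = stats.ttest_ind(zw_females[g], zz_reversed[g])
    flag = "<- differentially expressed?" if p < 0.05 else ""
    print(f"{g}: t = {t_stat:+.2f}, p = {p:.4f} {flag}")
```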

The new study is the first to show that there are two ways to produce an ovary in the bearded dragon, bringing us closer to understanding how temperature determines sex. The study also identifies several candidate genes potentially involved in temperature-dependent sex determination. These findings lay the foundation for future experiments to tease out each gene’s role in sensing temperature and directing sexual development.

Whiteley adds, “The most exciting component of this work is the discovery that the mechanism involves ubiquitous and highly conserved cellular processes, signaling pathways and epigenetic processes of chromatin modification. This new knowledge is bringing us closer to understanding how temperature determines sex, so it is a very exciting time to be in biology.”

Reference: “Two transcriptionally distinct pathways drive female development in a reptile with both genetic and temperature dependent sex determination” by Sarah L. Whiteley, Clare E. Holleley, Susan Wagner, James Blackburn, Ira W. Deveson, Jennifer A. Marshall Graves and Arthur Georges, 15 April 2021, PLOS Genetics.
DOI: 10.1371/journal.pgen.1009465

Funding: This work was supported by a Discovery Grant from the Australian Research Council (DP170101147) awarded to AG (lead), CEH and JMG. SLW was supported by a CSIRO Research Plus Postgraduate Award, postgraduate scholarships, and a Research Training Scholarship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Warp Drives and Negative Energy: Physicists Give Chances of Faster-Than-Light Space Travel a Boost
Warp Drives and Negative Energy: Physicists Give Chances of Faster-Than-Light Space Travel a Boost
Wormhole Passage

Faster than light travel is the only way humans could ever get to other stars in a reasonable amount of time. Credit: NASA

The closest star to Earth is Proxima Centauri. It is about 4.25 light-years away, or about 25 trillion miles (40 trillion km). The fastest spacecraft ever built, the now-in-space Parker Solar Probe, will reach a top speed of 450,000 mph. It would take just 20 seconds to go from Los Angeles to New York City at that speed, but it would take the solar probe about 6,300 years to reach Earth’s nearest neighboring star system.

If humanity ever wants to travel easily between stars, people will need to go faster than light. But so far, faster-than-light travel is possible only in science fiction.

In Isaac Asimov’s Foundation series, humanity can travel from planet to planet, star to star or across the universe using jump drives. As a kid, I read as many of those stories as I could get my hands on. I am now a theoretical physicist and study nanotechnology, but I am still fascinated by the ways humanity could one day travel in space.

Some characters – like the astronauts in the movies “Interstellar” and “Thor” – use wormholes to travel between solar systems in seconds. Another approach – familiar to “Star Trek” fans – is warp drive technology. Warp drives are theoretically possible, if still far-fetched, technology. Two recent papers made headlines in March when researchers claimed to have overcome one of the many challenges that stand between the theory of warp drives and reality.

But how do these theoretical warp drives really work? And will humans be making the jump to warp speed anytime soon?

Alcubierre

This 2-dimensional representation shows the flat, unwarped bubble of spacetime in the center where a warp drive would sit surrounded by compressed spacetime to the right (downward curve) and expanded spacetime to the left (upward curve). Credit: AllenMcC/Wikimedia Commons

Compression and expansion

Physicists’ current understanding of spacetime comes from Albert Einstein’s theory of General Relativity. General Relativity states that space and time are fused and that nothing can travel faster than the speed of light. General relativity also describes how mass and energy warp spacetime – hefty objects like stars and black holes curve spacetime around them. This curvature is what you feel as gravity and why many spacefaring heroes worry about “getting stuck in” or “falling into” a gravity well. Early science fiction writers John Campbell and Asimov saw this warping as a way to skirt the speed limit.

What if a starship could compress space in front of it while expanding spacetime behind it? “Star Trek” took this idea and named it the warp drive.

In 1994, Miguel Alcubierre, a Mexican theoretical physicist, showed that compressing spacetime in front of the spaceship while expanding it behind was mathematically possible within the laws of General Relativity. So, what does that mean? Imagine the distance between two points is 10 meters (33 feet). If you are standing at point A and can travel one meter per second, it would take 10 seconds to get to point B. However, let’s say you could somehow compress the space between you and point B so that the interval is now just one meter. Then, moving through spacetime at your maximum speed of one meter per second, you would be able to reach point B in about one second. In theory, this approach does not contradict the laws of relativity since you are not moving faster than light in the space around you. Alcubierre showed that the warp drive from “Star Trek” was in fact theoretically possible.
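As a quick check of the numbers in this example, the arithmetic can be written out explicitly (a worked restatement of the figures above, not a derivation from Alcubierre’s metric):

```latex
% Worked numbers for the example above: the local speed never exceeds
% 1 m/s, yet the original 10 m separation is crossed in about 1 s.
\[
t = \frac{d_{\text{compressed}}}{v_{\text{local}}}
  = \frac{1\,\mathrm{m}}{1\,\mathrm{m/s}} = 1\,\mathrm{s},
\qquad
v_{\text{effective}} = \frac{d_{\text{original}}}{t}
  = \frac{10\,\mathrm{m}}{1\,\mathrm{s}} = 10\,\mathrm{m/s} = 10\,v_{\text{local}}.
\]
```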

Proxima Centauri here we come, right? Unfortunately, Alcubierre’s method of compressing spacetime had one problem: it requires negative energy or negative mass.

One Sided Spacetime Curvatures

This 2-dimensional representation shows how positive mass curves spacetime (left side, blue earth) and negative mass curves spacetime in an opposite direction (right side, red earth). Credit: Tokamac/Wikimedia Commons, CC BY-SA

A negative energy problem

Alcubierre’s warp drive would work by creating a bubble of flat spacetime around the spaceship and curving spacetime around that bubble to reduce distances. The warp drive would require either negative mass – a theorized type of matter – or a ring of negative energy density to work. Physicists have never observed negative mass, so that leaves negative energy as the only option.

To create negative energy, a warp drive would use a huge amount of mass to create an imbalance between particles and antiparticles. For example, if an electron and an antielectron appear near the warp drive, one of the particles would get trapped by the mass, and the resulting imbalance produces a negative energy density. Alcubierre’s warp drive would use this negative energy to create the spacetime bubble.

But for a warp drive to generate enough negative energy, you would need a lot of matter. Alcubierre estimated that a warp drive with a 100-meter bubble would require the mass of the entire visible universe.

In 1999, physicist Chris Van Den Broeck showed that expanding the volume inside the bubble but keeping the surface area constant would reduce the energy requirements significantly, to just about the mass of the sun. A significant improvement, but still far beyond all practical possibilities.

A sci-fi future?

Two recent papers – one by Alexey Bobrick and Gianni Martire and another by Erik Lentz – provide solutions that seem to bring warp drives closer to reality.

Bobrick and Martire realized that by modifying spacetime within the bubble in a certain way, they could remove the need to use negative energy. This solution, though, does not produce a warp drive that can go faster than light.

Independently, Lentz also proposed a solution that does not require negative energy. He used a different geometric approach to solve the equations of General Relativity, and by doing so, he found that a warp drive wouldn’t need to use negative energy. Lentz’s solution would allow the bubble to travel faster than the speed of light.

It is essential to point out that these exciting developments are mathematical models. As a physicist, I won’t fully trust models until we have experimental proof. Yet, the science of warp drives is coming into view. As a science fiction fan, I welcome all this innovative thinking. In the words of Captain Picard, things are only impossible until they are not.

Written by Mario Borunda, Associate Professor of Physics, Oklahoma State University.

Originally published on The Conversation.

Ancient Zircons Date Onset of Plate Tectonics to 3.6 Billion Years Ago – Event Crucial to Making Earth Hospitable to Life
Ancient Zircons Date Onset of Plate Tectonics to 3.6 Billion Years Ago – Event Crucial to Making Earth Hospitable to Life
Zircons Photographed Using Cathodoluminescence

Zircons studied by the research team, photographed using cathodoluminescence, a technique that allowed the team to visualize the interiors of the crystals using a specialized scanning electron microscope. Dark circles on the zircons are the cavities left by the laser that was used to analyze the age and chemistry of the zircons. Credit: Michael Ackerson, Smithsonian

Earth’s Oldest Minerals Date Onset of Plate Tectonics to 3.6 Billion Years Ago

Ancient zircons from the Jack Hills of Western Australia hone the date of an event that was crucial to making the planet hospitable to life.

Scientists led by Michael Ackerson, a research geologist at the Smithsonian’s National Museum of Natural History, provide new evidence that modern plate tectonics, a defining feature of Earth and its unique ability to support life, emerged roughly 3.6 billion years ago.

Earth is the only planet known to host complex life and that ability is partly predicated on another feature that makes the planet unique: plate tectonics. No other planetary bodies known to science have Earth’s dynamic crust, which is split into continental plates that move, fracture, and collide with each other over eons. Plate tectonics affords a connection between the chemical reactor of Earth’s interior and its surface that has engineered the habitable planet people enjoy today, from the oxygen in the atmosphere to the concentrations of climate-regulating carbon dioxide. But when and how plate tectonics got started has remained mysterious, buried beneath billions of years of geologic time.

The study, published May 14, 2021, in the journal Geochemical Perspectives Letters, uses zircons, the oldest minerals ever found on Earth, to peer back into the planet’s ancient past.

Jack Hills of Western Australia

The Jack Hills of Western Australia, where the zircons studied were sampled from 15 grapefruit-sized rocks collected by the research team. Credit: Dustin Trail, University of Rochester

The oldest of the zircons in the study, which came from the Jack Hills of Western Australia, were around 4.3 billion years old — which means these nearly indestructible minerals formed when the Earth itself was in its infancy, only roughly 200 million years old. Along with other ancient zircons collected from the Jack Hills spanning Earth’s earliest history up to 3 billion years ago, these minerals provide the closest thing researchers have to a continuous chemical record of the nascent world.

“We are reconstructing how the Earth changed from a molten ball of rock and metal to what we have today,” Ackerson said. “None of the other planets have continents or liquid oceans or life. In a way, we are trying to answer the question of why Earth is unique, and we can answer that to an extent with these zircons.”

To look billions of years into Earth’s past, Ackerson and the research team collected 15 grapefruit-sized rocks from the Jack Hills and reduced them into their smallest constituent parts — minerals — by grinding them into sand with a machine called a chipmunk. Fortunately, zircons are very dense, which makes them relatively easy to separate from the rest of the sand using a technique similar to gold panning.

Polished Slice of a Rock Collected From Jack Hills of Western Australia

A thin, polished slice of a rock collected from the Jack Hills of Western Australia. Using a special microscope equipped with a polarizing lens, the research team was able to examine the intricate internal structure of quartz that makes up the rock, including unique features that allowed them to identify ancient zircons (magenta mineral in the center of the red-outlined inset image in the right photo). Credit: Michael Ackerson, Smithsonian

The team tested more than 3,500 zircons, each just a couple of human hairs wide, by blasting them with a laser and then measuring their chemical composition with a mass spectrometer. These tests revealed the age and underlying chemistry of each zircon. Of the thousands tested, about 200 were fit for study due to the ravages of the billions of years these minerals endured since their creation.

“Unlocking the secrets held within these minerals is no easy task,” Ackerson said. “We analyzed thousands of these crystals to come up with a handful of useful data points, but each sample has the potential to tell us something completely new and reshape how we understand the origins of our planet.”

A zircon’s age can be determined with a high degree of precision because each one contains uranium. Uranium’s famously radioactive nature and well-quantified rate of decay allow scientists to reverse engineer how long the mineral has existed.
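That reverse engineering rests on the standard radiometric age equation, stated here in its generic form (the zircon work itself uses the specific uranium-lead decay systems):

```latex
% Radiometric age from parent atoms remaining (P) and daughter atoms
% produced (D), given the parent isotope's decay constant lambda.
\[
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right),
\qquad
\text{e.g. for } ^{238}\mathrm{U} \to {}^{206}\mathrm{Pb},\;
\lambda \approx 1.55 \times 10^{-10}\,\mathrm{yr}^{-1}.
\]
```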

The aluminum content of each zircon was also of interest to the research team. Tests on modern zircons show that high-aluminum zircons can only be produced in a limited number of ways, which allows researchers to use the presence of aluminum to infer what may have been going on, geologically speaking, at the time the zircon formed.

After analyzing the results of the hundreds of useful zircons from among the thousands tested, Ackerson and his co-authors deciphered a marked increase in aluminum concentrations roughly 3.6 billion years ago.

“This compositional shift likely marks the onset of modern-style plate tectonics and potentially could signal the emergence of life on Earth,” Ackerson said. “But we will need to do a lot more research to determine this geologic shift’s connections to the origins of life.”

The line of inference that links high-aluminum zircons to the onset of a dynamic crust with plate tectonics goes like this: one of the few ways for high-aluminum zircons to form is by melting rocks deeper beneath Earth’s surface.

“It’s really hard to get aluminum into zircons because of their chemical bonds,” Ackerson said. “You need to have pretty extreme geologic conditions.”

Ackerson reasons that this sign that rocks were being melted deeper beneath Earth’s surface meant the planet’s crust was getting thicker and beginning to cool, and that this thickening of Earth’s crust was a sign that the transition to modern plate tectonics was underway.

Prior research on the 4 billion-year-old Acasta Gneiss in northern Canada also suggests that Earth’s crust was thickening and causing rock to melt deeper within the planet.

“The results from the Acasta Gneiss give us more confidence in our interpretation of the Jack Hills zircons,” Ackerson said. “Today these locations are separated by thousands of miles, but they’re telling us a pretty consistent story, which is that around 3.6 billion years ago something globally significant was happening.”

This work is part of the museum’s new initiative called Our Unique Planet, a public-private partnership, which supports research into some of the most enduring and significant questions about what makes Earth special. Other research will investigate the source of Earth’s liquid oceans and how minerals may have helped spark life.

Ackerson said he hopes to follow up these results by searching the ancient Jack Hills zircons for traces of life and by looking at other supremely old rock formations to see if they too show signs of Earth’s crust thickening around 3.6 billion years ago.

Reference: “Emergence of peraluminous crustal magmas and implications for the early Earth” by M.R. Ackerson, D. Trail and J. Buettner, 14 May 2021, Geochemical Perspectives Letters.
DOI: 10.7185/geochemlet.2114

Funding and support for this research were provided by the Smithsonian and the National Aeronautics and Space Administration (NASA).

National Cyber Defense Is a “Wicked” Problem: Why the Colonial Pipeline Ransomware Attack and the SolarWinds Hack Were All but Inevitable
National Cyber Defense Is a “Wicked” Problem: Why the Colonial Pipeline Ransomware Attack and the SolarWinds Hack Were All but Inevitable
780th Military Intelligence Brigade

Military units like the 780th Military Intelligence Brigade shown here are just one component of U.S. national cyber defense. Credit: Fort George G. Meade

Takeaways:

  • There are no easy solutions to shoring up U.S. national cyber defenses.
  • Software supply chains and private sector infrastructure companies are vulnerable to hackers.
  • Many U.S. companies outsource software development because of a talent shortage, and some of that outsourcing goes to companies in Eastern Europe that are vulnerable to Russian operatives.
  • U.S. national cyber defense is split between the Department of Defense and the Department of Homeland Security, which leaves gaps in authority.

The ransomware attack on Colonial Pipeline on May 7, 2021, exemplifies the huge challenges the U.S. faces in shoring up its cyber defenses. The private company, which controls a significant component of the U.S. energy infrastructure and supplies nearly half of the East Coast’s liquid fuels, was vulnerable to an all-too-common type of cyber attack. The FBI has attributed the attack to a Russian cybercrime gang. It would be difficult for the government to mandate better security at private companies, and the government is unable to provide that security for the private sector.

Similarly, the SolarWinds hack, one of the most devastating cyber attacks in history, which came to light in December 2020, exposed vulnerabilities in global software supply chains that affect government and private sector computer systems. It was a major breach of national security that revealed gaps in U.S. cyber defenses.

These gaps include inadequate security by a major software producer, fragmented authority for government support to the private sector, blurred lines between organized crime and international espionage, and a national shortfall in software and cybersecurity skills. None of these gaps is easily bridged, but the scope and impact of the SolarWinds attack show how critical controlling these gaps is to U.S. national security.

The SolarWinds breach, likely carried out by a group affiliated with Russia’s FSB security service, compromised the software development supply chain used by SolarWinds to update 18,000 users of its Orion network management product. SolarWinds sells software that organizations use to manage their computer networks. The hack, which allegedly began in early 2020, was discovered only in December when cybersecurity company FireEye revealed that it had been hit by the malware. More worrisome, this may have been part of a broader attack on government and commercial targets in the U.S.

The Biden administration is preparing an executive order that is expected to address these software supply chain vulnerabilities. However, these changes, as important as they are, would probably not have prevented the SolarWinds attack. And preventing ransomware attacks like the Colonial Pipeline attack would require U.S. intelligence and law enforcement to infiltrate every organized cyber criminal group in Eastern Europe.

Supply chains, sloppy security and a talent shortage

The vulnerability of the software supply chain – the collections of software components and software development services companies use to build software products – is a well-known problem in the security field. In response to a 2017 executive order, a report by a Department of Defense-led interagency task force identified “a surprising level of foreign dependence,” workforce challenges, and critical capabilities such as printed circuit board manufacturing that companies are moving offshore in pursuit of competitive pricing. All these factors came into play in the SolarWinds attack.

SolarWinds, driven by its growth strategy and plans to spin off its managed service provider business in 2021, bears much of the responsibility for the damage, according to cybersecurity experts. I believe that the company put itself at risk by outsourcing its software development to Eastern Europe, including a company in Belarus. Russian operatives have been known to use companies in former Soviet satellite countries to insert malware into software supply chains. Russia used this technique in the 2017 NotPetya attack that cost global companies more than US$10 billion.

SolarWinds also failed to practice basic cybersecurity hygiene, according to a cybersecurity researcher.

Vinoth Kumar reported that the password for the software company’s development server was allegedly “solarwinds123,” an egregious violation of fundamental standards of cybersecurity. SolarWinds’ sloppy password management is ironic in light of the Password Management Solution of the Year award the company received in 2019 for its Passportal product.

In a blog post, the company admitted that “the attackers were able to circumvent threat detection techniques employed by both SolarWinds, other private companies, and the federal government.”

The larger question is why SolarWinds, an American company, had to turn to foreign providers for software development. A Department of Defense report about supply chains characterizes the lack of software engineers as a crisis, partly because the education pipeline is not providing enough software engineers to meet demand in the commercial and defense sectors.

There’s also a shortage of cybersecurity talent in the U.S. Engineers, software developers, and network engineers are among the most needed roles across the country, and the lack of software engineers who focus specifically on software security is acute.

Fragmented authority

Though I’d argue SolarWinds has much to answer for, it should not have had to defend itself against a state-orchestrated cyber attack on its own. The 2018 National Cyber Strategy describes how supply chain security should work. The government determines the security of federal contractors like SolarWinds by reviewing their risk management strategies, ensuring that they are informed of threats and vulnerabilities and responding to incidents on their systems.

However, this official strategy split these responsibilities between the Pentagon for defense and intelligence systems and the Department of Homeland Security for civil agencies, continuing a fragmented approach to information security that began in the Reagan era. Execution of the strategy relies on the DOD’s U.S. Cyber Command and DHS’s Cyber and Infrastructure Security Agency. DOD’s strategy is to “defend forward”: that is, to disrupt malicious cyber activity at its source, which proved effective in the runup to the 2018 midterm elections. The Cyber and Infrastructure Security Agency, established in 2018, is responsible for providing information about threats to critical infrastructure sectors.

Neither agency appears to have sounded a warning or attempted to mitigate the attack on SolarWinds. The government’s response came only after the attack. The Cyber and Infrastructure Security Agency issued alerts and guidance, and a Cyber Unified Coordination Group was formed to facilitate coordination among federal agencies.

These tactical actions, while useful, were only a partial solution to the larger, strategic problem. The fragmentation of the authorities for national cyber defense evident in the SolarWinds hack is a strategic weakness that complicates cybersecurity for the government and private sector and invites more attacks on the software supply chain.

A wicked problem

National cyber defense is an example of a “wicked problem,” a policy problem that has no clear solution or measure of success. The Cyberspace Solarium Commission identified many inadequacies of U.S. national cyber defenses. In its 2020 report, the commission noted that “There is still not a clear unity of effort or theory of victory driving the federal government’s approach to protecting and securing cyberspace.”

Many of the factors that make developing a centralized national cyber defense challenging lie outside of the government’s direct control. For example, economic forces push technology companies to get their products to market quickly, which can lead them to take shortcuts that undermine security. Legislation along the lines of the Gramm-Leach-Bliley Act passed in 1999 could help deal with the need for speed in software development. The law placed security requirements on financial institutions. But software development companies are likely to push back against additional regulation and oversight.

The Biden administration appears to be taking the challenge seriously. The president has appointed a national cybersecurity director to coordinate related government efforts. It remains to be seen whether and how the administration will address the problem of fragmented authorities and clarify how the government will protect companies that supply critical digital infrastructure. It’s unreasonable to expect any U.S. company to be able to fend for itself against a foreign nation’s cyberattack.

Steps forward

In the meantime, software developers can apply the secure software development approach advocated by the National Institute of Standards and Technology. Government and industry can prioritize the development of artificial intelligence that can identify malware in existing systems. All this takes time, however, and hackers move quickly.

Finally, companies need to aggressively assess their vulnerabilities, particularly by engaging in more “red teaming” activities: that is, having employees, contractors or both play the role of hackers and attack the company.

Recognizing that hackers in the service of foreign adversaries are dedicated, thorough and not constrained by any rules is important for anticipating their next moves and reinforcing and improving U.S. national cyber defenses. Otherwise, Colonial Pipeline is unlikely to be the last victim of a major attack on U.S. infrastructure and SolarWinds is unlikely to be the last victim of a major attack on the U.S. software supply chain.

Written by Terry Thompson, Adjunct Instructor in Cybersecurity, Johns Hopkins University.

Originally published on The Conversation.

Exploring Earth From Space: Qeshm Island, Iran [Video]
Exploring Earth From Space: Qeshm Island, Iran [Video]
Qeshm Island, Iran

Credit: Contains modified Copernicus Sentinel data (2020), processed by ESA, CC BY-SA 3.0 IGO

The Copernicus Sentinel-2 mission takes us over Qeshm Island – the largest island in Iran.

Qeshm Island lies in the Strait of Hormuz, parallel to the Iranian coast from which it is separated by the Clarence Strait (Khuran). With an area of around 1200 sq km, the island has an irregular outline and shape often compared to that of an arrow. The island is approximately 135 km long and spans around 40 km at its widest point.

The image shows the largely arid land surfaces on both Qeshm Island and mainland Iran. The island generally has a rocky coastline except for the sandy bays and mud flats that fringe the northwest part of the island.

The Hara Forest Protected Area, a network of shallow waterways and forest, can be seen clearly in the image, between Qeshm Island and the mainland. Hara, which means ‘grey mangrove’ in the local language, is a large mangrove forest and protected area that brings more than 150 species of migrating birds during spring, including the great egret and the western reef heron. The forest also hosts sea turtles and aquatic snakes.

The dome-shaped Namakdan mountain is visible in the southwest part of the island and features the Namakdan Cave – one of the longest salt caves in the world. With a length of six kilometers, the cave is filled with salt sculptures, salt rivers, and salt megadomes.

The water south of Qeshm Island appears particularly dark, while lighter, turquoise colors can be seen in the left of the image most likely due to shallow waters and sediment content. Several islands can be seen in the waters including Hengam Island, visible just south of Qeshm, Larak Island and Hormuz Island which is known for its red, edible soil.

Several cloud formations can be seen in the bottom-right of the image, as well as a part of the Musandam Peninsula, the northeastern tip of the Arabian Peninsula. The peninsula’s jagged coastline features fjordlike inlets called ‘khors’ and its waters are home to dolphins and other marine life.

Data from the Copernicus Sentinel-2 mission can help monitor urban expansion, land-cover change, and agriculture. The mission’s frequent revisits over the same area and high spatial resolution also allow changes in inland water bodies to be closely monitored.

Surprise Twist Suggests Stars Grow Competitively – Unprecedented High-Resolution Map of the Orion Nebula Cluster
Surprise Twist Suggests Stars Grow Competitively – Unprecedented High-Resolution Map of the Orion Nebula Cluster
Orion Nebula Cluster Map

Unprecedented high-resolution map of the Orion Nebula Cluster showing newborn stars (orange squares), gravitationally collapsing gas cores (red circles), and non-collapsing gas cores (blue crosses). Credit: Takemura et al.

A survey of star formation activity in the Orion Nebula Cluster found similar mass distributions for newborn stars and dense gas cores, which may evolve into stars. Counterintuitively, this means that the amount of gas a core accretes as it develops, and not the initial mass of the core, is the key factor in deciding the final mass of the produced star.

The Universe is populated with stars of various masses. Dense cores in clouds of interstellar gas collapse under their own gravity to form stars, but what determines the final mass of the star remains an open question. There are two competing theories. In the core-collapse model, larger stars form from larger cores. In the competitive accretion model, all cores start out about the same mass but accrete different amounts of gas from the surroundings as they grow.

To distinguish between these two scenarios, a research team led by Hideaki Takemura at the National Astronomical Observatory of Japan created a map of the Orion Nebula Cluster where new stars are forming, based on data from the American CARMA interferometer and NAOJ’s own Nobeyama 45-m Radio Telescope. Thanks to the unprecedented high resolution of the map, the team was able to compare the masses of the newly formed stars and gravitationally collapsing dense cores. They found that the mass distributions are similar for the two populations. They also found many smaller cores which don’t have strong enough gravity to contract into stars.

One would think that similar mass distributions for prestellar cores and newborn stars would favor the core-collapse model. But because it is impossible for a core to impart all of its mass to a new star, the match actually shows that continued gas inflow during growth is an important factor, favoring the competitive accretion model.
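As an illustration of the kind of comparison involved, here is a minimal sketch that tests whether two populations share a mass distribution, using entirely synthetic masses and a two-sample Kolmogorov-Smirnov test; the study’s actual statistics are more involved.

```python
# Illustrative sketch: compare the mass distributions of two populations
# (gravitationally bound cores vs. newborn stars) with a two-sample
# Kolmogorov-Smirnov test. Masses here are synthetic, drawn from lognormal
# distributions loosely resembling observed core/stellar mass functions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
core_masses = rng.lognormal(mean=-0.7, sigma=0.6, size=300)  # solar masses
star_masses = rng.lognormal(mean=-0.7, sigma=0.6, size=250)  # solar masses

stat, p_value = stats.ks_2samp(core_masses, star_masses)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A large p-value means we cannot reject the hypothesis that the two
# samples come from the same underlying mass distribution.
```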

Now the team will expand their map using additional data from CARMA and the Nobeyama 45-m Radio Telescope to see if the results from the Orion Nebula Cluster hold true for other regions.

Reference: “The Core Mass Function in the Orion Nebula Cluster Region: What Determines the Final Stellar Masses?” by Hideaki Takemura, Fumitaka Nakamura, Shuo Kong, Héctor G. Arce, John M. Carpenter, Volker Ossenkopf-Okada, Ralf Klessen, Patricio Sanhueza, Yoshito Shimajiri, Takashi Tsukagoshi, Ryohei Kawabe, Shun Ishii, Kazuhito Dobashi, Tomomi Shimoikura, Paul F. Goldsmith, Álvaro Sánchez-Monge, Jens Kauffmann, Thushara G. S. Pillai, Paolo Padoan, Adam Ginsberg, Rowan J. Smith, John Bally, Steve Mairs, Jaime E. Pineda, Dariusz C. Lis, Blakesley Burkhart, Peter Schilke, Hope How-Huan Chen, Andrea Isella, Rachel K. Friesen, Alyssa A. Goodman and Doyal A. Harper, 22 March 2021, Astrophysical Journal Letters.
DOI: 10.3847/2041-8213/abe7dd

Funding: Japan Society for the Promotion of Science, Deutsche Forschungsgemeinschaft, Heidelberg Cluster of Excellence STRUCTURES, European Research Council, Spanish Ministry of Economy and Competitiveness, the State Agency for Research of the Spanish Ministry of Science and Innovation

Scientists Rewrite the Genesis of Mosquito-Borne Viruses – Discovery Enables Better Designed Vaccines
Scientists Rewrite the Genesis of Mosquito-Borne Viruses – Discovery Enables Better Designed Vaccines
Flavivirus

Cryo-electron microscopy reconstruction of Binjari virus. The projecting spikes are a typical feature of immature flaviviruses such as dengue virus but reveal an unexpected organization. Credit: Associate Professor Fasséli Coulibaly

Better designed vaccines for insect-spread viruses like dengue and Zika now possible.

Better designed vaccines for insect-spread viruses like dengue and Zika are likely after researchers discovered that earlier models of immature flavivirus particles had been misinterpreted.

Researchers from The University of Queensland and Monash University have now determined the first complete 3D molecular structure of the immature flavivirus, revealing an unexpected organization.

UQ researcher Associate Professor Daniel Watterson said the team was studying the insect-specific Binjari virus when they made the discovery.

“We were using Australia’s safe-to-handle Binjari virus, which we combine with more dangerous viral genes to make safer and more effective vaccines,” Dr. Watterson said.

“But when analyzing Binjari we could clearly see that the molecular structure we’ve all been working from since 2008 wasn’t quite correct.

“Imagine trying to build a house when your blueprints are wrong – that’s exactly what it’s like when you’re attempting to build effective vaccines and treatments and your molecular ‘map’ is not quite right.”

The team used a technique known as cryogenic electron microscopy to image the virus, generating high-resolution data from Monash’s Ramaciotti Centre for Cryo-Electron Microscopy facility.

With thousands of collected two-dimensional images of the virus, the researchers then combined them using a high-performance computing platform called ‘MASSIVE’ to construct a high-resolution 3D structure.

Monash’s Associate Professor Fasséli Coulibaly, a co-leader of the study, said the revelation could lead to new and better vaccines for flaviviruses, which have a huge disease burden globally.

“Flaviviruses are globally distributed and dengue virus alone infects around 400 million people annually,” Dr. Coulibaly said.

“They cause a spectrum of potentially severe diseases including hepatitis, vascular shock syndrome, encephalitis, acute flaccid paralysis, congenital abnormalities, and fetal death.

“This structure defines the exact wiring of the immature virus before it becomes infectious, and we now have a better understanding of the levers and pulleys involved in viral assembly.

“This is a continuation of fundamental research by us and others and, without this hard-won basic knowledge, we wouldn’t have the solid foundation needed to design tomorrow’s treatments.”

Reference: “The structure of an infectious immature flavivirus redefines viral architecture and maturation” by Natalee D. Newton, Joshua M. Hardy, Naphak Modhiran, Leon E. Hugo, Alberto A. Amarilla, Summa Bibby, Hariprasad Venugopal, Jessica J. Harrison, Renee J. Traves, Roy A. Hall, Jody Hobson-Peters, Fasséli Coulibaly and Daniel Watterson, 14 May 2021, Science Advances.
DOI: 10.1126/sciadv.abe4507

The joint first authors are Dr. Natalee Newton from UQ’s Watterson lab and Dr. Joshua Hardy from the Coulibaly lab at the Monash Biomedicine Discovery Institute.

NASA’s Webb Space Telescope to Probe the Outer Realm of Exoplanetary Systems, Hunt for New Worlds
NASA’s Webb Space Telescope to Probe the Outer Realm of Exoplanetary Systems, Hunt for New Worlds
HR 8799 Exoplanet System

Left: This is an image of the star HR 8799 taken by Hubble’s Near Infrared Camera and Multi-Object Spectrometer (NICMOS) in 1998. A mask within the camera (coronagraph) blocks most of the light from the star. Astronomers also used software to digitally subtract more starlight. Nevertheless, scattered light from HR 8799 dominates the image, obscuring four faint planets later discovered from ground-based observations. Right: A re-analysis of NICMOS data in 2011 uncovered three of the exoplanets, which were not seen in the 1998 images. Webb will probe the planets’ atmospheres at infrared wavelengths astronomers have rarely used to image distant worlds. Credit: NASA, ESA, and R. Soummer (STScI)

NASA’s Webb to Study Young Exoplanets on the Edge

Webb will probe the outer realm of exoplanetary systems, investigating known planets and hunting for new worlds.

Although more than 4,000 planets have been discovered around other stars, they don’t represent the wide diversity of possible alien worlds. Most of the exoplanets detected so far are so-called “star huggers”: they orbit so close to their host stars that they complete an orbit in days or weeks. These are the easiest to find with current detection techniques.

But there’s a vast, mostly uncharted landscape to hunt for exoplanets in more distant orbits. Astronomers have only begun to explore this frontier. The planets are far enough away from their stars that telescopes equipped with masks to block out a star’s blinding glare can see the planets directly. The easiest planets to spot are hot, newly formed worlds. They are young enough that they still glow in infrared light with the heat from their formation.

This outer realm of exoplanetary systems is an ideal hunting ground for NASA’s upcoming James Webb Space Telescope. Webb will probe the atmospheres of nearby known exoplanets, such as HR 8799 and 51 Eridani b, at infrared wavelengths. Webb also will hunt for other distant worlds—possibly down to Saturn-size—on the outskirts of planetary systems that cannot be detected with ground-based telescopes.

Positional Schematic of the Members of the HR 8799 Exoplanet System

This schematic shows the positions of the four exoplanets orbiting far away from the nearby star HR 8799. The orbits appear elongated because of a slight tilt of the plane of the orbits relative to our line of sight. The size of the HR 8799 planetary system is comparable to our solar system, as indicated by the orbit of Neptune, shown to scale. Credit: NASA, ESA, and R. Soummer (STScI)

Before planets around other stars were first discovered in the 1990s, these far-flung exotic worlds lived only in the imagination of science fiction writers.

But even their creative minds could not have conceived of the variety of worlds astronomers have uncovered. Many of these worlds, called exoplanets, are vastly different from our solar system’s family of planets. They range from star-hugging “hot Jupiters” to oversized rocky planets dubbed “super Earths.” Our universe apparently is stranger than fiction.

Seeing these distant worlds isn’t easy because they get lost in the glare of their host stars. Trying to detect them is like straining to see a firefly hovering next to a lighthouse’s brilliant beacon.

That’s why astronomers have identified most of the more than 4,000 exoplanets found so far using indirect techniques, such as through a star’s slight wobble or its unexpected dimming as a planet passes in front of it, blocking some of the starlight.
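For a sense of scale, the dimming from a transiting planet has a simple geometric form (a textbook relation, not specific to this article): the fractional drop in starlight equals the ratio of the planet’s and star’s disk areas.

```latex
% Transit depth for a planet of radius R_p crossing a star of radius R_*.
\[
\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_*}\right)^{2}
\]
```

A Jupiter-sized planet crossing a Sun-like star dims it by roughly 1%; an Earth-sized planet, by less than 0.01%.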

These techniques work best, however, for planets orbiting close to their stars, where astronomers can detect changes over weeks or even days as the planet completes its racetrack orbit. But finding only star-skimming planets doesn’t provide astronomers with a comprehensive picture of all the possible worlds in star systems.

Exoplanet 51 Eridani b

This discovery image of a Jupiter-sized extrasolar planet orbiting the nearby star 51 Eridani was taken in near-infrared light in 2014 by the Gemini Planet Imager. The bright central star is hidden behind a mask in the center of the image to enable the detection of the exoplanet, which is 1 million times fainter than 51 Eridani. The exoplanet is on the outskirts of the planetary system 11 billion miles from its star. Webb will probe the planet’s atmosphere at infrared wavelengths astronomers have rarely used to image distant worlds. Credit: International Gemini Observatory/NOIRLab/NSF/AURA, J. Rameau (University of Montreal), and C. Marois (National Research Council of Canada Herzberg)

Another technique researchers use in the hunt for exoplanets focuses on planets that orbit farther from a star’s blinding glare. Scientists, using specialized imaging techniques that block out the glare from the star, have uncovered young exoplanets that are so hot they glow in infrared light. In this way, some exoplanets can be directly seen and studied.

NASA’s upcoming James Webb Space Telescope will help astronomers probe farther into this bold new frontier. Webb, like some ground-based telescopes, is equipped with special optical systems called coronagraphs, which use masks designed to block out as much starlight as possible to study faint exoplanets and to uncover new worlds.

Two targets early in Webb’s mission are the planetary systems 51 Eridani and HR 8799. Of the few dozen directly imaged planets, astronomers plan to use Webb to analyze in detail the systems that are closest to Earth and have planets at the widest separations from their stars, meaning the planets appear far enough from their star’s glare to be directly observed. The HR 8799 system resides 133 light-years from Earth and 51 Eridani 96 light-years.

Webb’s Planetary Targets

Two observing programs early in Webb’s mission combine the spectroscopic capabilities of the Near Infrared Spectrograph (NIRSpec) and the imaging of the Near Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI) to study the four giant planets in the HR 8799 system. In a third program, researchers will use NIRCam to analyze the giant planet in 51 Eridani.

The four giant planets in the HR 8799 system are each roughly 10 Jupiter masses. They orbit more than 14 billion miles from a star that is slightly more massive than the Sun. The giant planet in 51 Eridani is twice the mass of Jupiter and orbits about 11 billion miles from a Sun-like star. Both planetary systems have orbits oriented face-on toward Earth. This orientation gives astronomers a unique opportunity to get a bird’s-eye view down on top of the systems, like looking at the concentric rings on an archery target.

Many exoplanets found in the outer orbits of their stars are vastly different from our solar system’s planets. Most of the exoplanets discovered in this outer region, including those in HR 8799, are between 5 and 10 Jupiter masses, making them among the most massive planets found to date.

These outer exoplanets are relatively young, from tens of millions to hundreds of millions of years old—much younger than our solar system’s 4.5 billion years. So they’re still glowing with heat from their formation. The images of these exoplanets are essentially baby pictures, revealing planets in their youth.

This video shows four Jupiter-sized exoplanets orbiting billions of miles away from their star in the nearby HR 8799 system. The planetary system is oriented face-on toward Earth, giving astronomers a unique bird’s-eye view of the planets’ motion. The exoplanets are orbiting so far away from their star that they take anywhere from decades to centuries to complete an orbit. The video consists of seven images of the system taken over a seven-year period with the W.M. Keck Observatory on Mauna Kea, Hawaii. Keck’s coronagraph blocks out most of the starlight so that the much fainter and smaller exoplanets can be seen. Credit: Jason Wang (Caltech) and Christian Marois (NRC Herzberg)

Webb will probe into the mid-infrared, a wavelength range astronomers have rarely used before to image distant worlds. This infrared “window” is difficult to observe from the ground because of thermal emission from—and absorption in—Earth’s atmosphere.

“Webb’s strong point is the uninhibited light coming through space in the mid-infrared range,” said Klaus Hodapp of the University of Hawaii in Hilo, lead investigator of the NIRSpec observations of the HR 8799 system. “Earth’s atmosphere is pretty difficult to work through. The major absorption molecules in our own atmosphere prevent us from seeing interesting features in planets.”

The mid-infrared “is the region where Webb really will make seminal contributions to understanding what are the particular molecules, what are the properties of the atmosphere that we hope to find which we don’t really get just from the shorter, near-infrared wavelengths,” said Charles Beichman of NASA’s Jet Propulsion Laboratory in Pasadena, California, lead investigator of the NIRCam and MIRI observations of the HR 8799 system. “We’ll build on what the ground-based observatories have done, but the goal is to expand on that in a way that would be impossible without Webb.”

How Do Planets Form?

One of the researchers’ main goals in both systems is to use Webb to help determine how the exoplanets formed. Were they created through a buildup of material in the disk surrounding the star, enriched in heavy elements such as carbon, just as Jupiter probably did? Or, did they form from the collapse of a hydrogen cloud, like a star, and become smaller under the relentless pull of gravity?

Atmospheric makeup can provide clues to a planet’s birth. “One of the things we’d like to understand is the ratio of the elements that have gone into the formation of these planets,” Beichman said. “In particular, carbon versus oxygen tells you quite a lot about where the gas that formed the planet comes from. Did it come from a disk that accreted a lot of the heavier elements or did it come from the interstellar medium? So it’s what we call the carbon-to-oxygen ratio that is quite indicative of formation mechanisms.”

This video shows a Jupiter-sized exoplanet orbiting far away—roughly 11 billion miles—from a nearby, Sun-like star, 51 Eridani. The planetary system is oriented face-on toward Earth, giving astronomers a unique bird’s-eye view of the planet’s motion. The video consists of five images taken over four years with the Gemini South Telescope’s Gemini Planet Imager, in Chile. Gemini’s coronagraph blocks out most of the starlight so that the much fainter and smaller exoplanet can be seen. Credit: Jason Wang (Caltech)/Gemini Planet Imager Exoplanet Survey

To answer these questions, the researchers will use Webb to probe deeper into the exoplanets’ atmospheres. NIRCam, for example, will measure the atmospheric fingerprints of elements like methane. It also will look at cloud features and the temperatures of these planets. “We already have a lot of information at these near-infrared wavelengths from ground-based facilities,” said Marshall Perrin of the Space Telescope Science Institute in Baltimore, Maryland, lead investigator of NIRCam observations of 51 Eridani b. “But the data from Webb will be much more precise, much more sensitive. We’ll have a more complete set of wavelengths, including filling in gaps where you can’t get those wavelengths from the ground.”

The astronomers will also use Webb and its superb sensitivity to hunt for less-massive planets far from their star. “From ground-based observations, we know that these massive planets are relatively rare,” Perrin said. “But we also know that for the inner parts of systems, lower-mass planets are dramatically more common than larger-mass planets. So the question is, does it also hold true for these further separations out?” Beichman added, “Webb’s operation in the cold environment of space allows a search for fainter, smaller planets, impossible to detect from the ground.”

Another goal is understanding how the myriad planetary systems discovered so far were created.

“I think what we are finding is that there is a huge diversity in solar systems,” Perrin said. “You have systems where you have these hot Jupiter planets in very close orbits. You have systems where you don’t. You have systems where you have a 10-Jupiter-mass planet and ones in which you have nothing more massive than several Earths. We ultimately want to understand how the diversity of planetary system formation depends on the environment of the star, the mass of the star, all sorts of other things and eventually through these population-level studies, we hope to place our own solar system in context.”

The NIRSpec spectroscopic observations of HR 8799 and the NIRCam observations of 51 Eridani are part of the Guaranteed Time Observations programs that will be conducted shortly after Webb’s launch later this year. The NIRCam and MIRI observations of HR 8799 are a collaboration of two instrument teams and are also part of the Guaranteed Time Observations program.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.