Laboratory solar flares reveal clues to the mechanism behind bursts of high-energy particles

By simulating solar flares at the scale of a banana, Caltech researchers have analyzed the process by which these massive explosions hurl potentially harmful energetic particles and X-rays into the cosmos. NASA's Solar Dynamics Observatory captured this image of a solar flare…

Immunotherapy dramatically increases survival in people with lymphomatoid granulomatosis

Results of a clinical trial led by researchers at the National Institutes of Health (NIH) show that people with low-grade lymphomatoid granulomatosis treated with interferon alfa-2b, a type of immunotherapy, can live for decades after diagnosis.

In the US: Robots have begun to replace missing employees in restaurants

After months of waiting for employees to return to work, some businesses are giving up, writes Insider.

As of June, only 10% of unemployed job seekers in the United States said they urgently need to find work, according to a study by the job site Indeed. Whether because of worries about childcare, sufficient savings, or supplemental unemployment benefits, the unemployed are in no hurry to return.

Then there is the QR code. The technology that lets restaurants deliver the menu straight to customers' phones is riding the current wave. There are other signs, too, that technology will replace workers, at least in restaurants.

Because of this, some businesses are starting to turn to technology to replace part of their staff.

Cracker Barrel has introduced a mobile app through which customers can order and pay for food and drinks; McDonald's has started running automated drive-thru lanes at 10 locations in Chicago; and Dave & Buster's plans to expand similar capabilities.

The appeal is clear: automated solutions are often a one-time investment, and they increase productivity. They do not need special bonuses just to show up for work.

Economic data suggest that the trend has intensified over the past few months. Productivity rose by 5.4% in the first quarter of 2021 – the fastest pace in 20 years.

Businesses that are starting to adopt the new technologies, for their part, do not believe the machines will fully replace workers.

On the other hand, people have long feared that new technologies will destroy many jobs and depress wages. This kind of thinking is summed up by the term Luddism – after a social movement in the United Kingdom in the early years of the Industrial Revolution in the 19th century, whose members destroyed the machines that had taken their jobs.

So is technology bad? The adoption of technological innovations has raised productivity throughout the last few hundred years of economic history. It is higher productivity that allows employers to pay higher wages. Higher wages, in turn, increase economic activity, which expands the market.

Economist Noah Smith writes that in previous periods of innovation – ever since the Industrial Revolution – new technology has not destroyed jobs on balance, but created them.

Secret Behind Jupiter’s “Energy Crisis” Revealed – Puzzled Astronomers for Decades
Jupiter Atmospheric Heating

Jupiter is shown in visible light for context underneath an artistic impression of the Jovian upper atmosphere’s infrared glow. The brightness of this upper atmosphere layer corresponds to temperatures, from hot to cold, in this order: white, yellow, bright red and lastly, dark red. The aurorae are the hottest regions and the image shows how heat may be carried by winds away from the aurora and cause planet-wide heating. Credit: J. O’Donoghue (JAXA)/Hubble/NASA/ESA/A. Simon/J. Schmidt

New research published in Nature has revealed the solution to Jupiter’s “energy crisis,” which has puzzled astronomers for decades.

Space scientists at the University of Leicester worked with colleagues from the Japanese Space Agency (JAXA), Boston University, NASA’s Goddard Space Flight Center and the National Institute of Information and Communications Technology (NICT) to reveal the mechanism behind Jupiter’s atmospheric heating.

Now, using data from the Keck Observatory in Hawai’i, astronomers have created the most-detailed yet global map of the gas giant’s upper atmosphere, confirming for the first time that Jupiter’s powerful aurorae are responsible for delivering planet-wide heating.

Dr. James O’Donoghue, a researcher at JAXA who completed his PhD at Leicester, is the lead author of the research paper. He said:

“We first began trying to create a global heat map of Jupiter’s uppermost atmosphere at the University of Leicester. The signal was not bright enough to reveal anything outside of Jupiter’s polar regions at the time, but with the lessons learned from that work we managed to secure time on one of the largest, most competitive telescopes on Earth some years later.

“Using the Keck telescope we produced temperature maps of extraordinary detail. We found that temperatures start very high within the aurora, as expected from previous work, but now we could observe that Jupiter’s aurora, despite taking up less than 10% of the area of the planet, appear to be heating the whole thing.

“This research started in Leicester and carried on at Boston University and NASA before ending at JAXA in Japan. Collaborators from each continent working together made this study successful, combined with data from NASA’s Juno spacecraft in orbit around Jupiter and JAXA’s Hisaki spacecraft, an observatory in space.”

Dr. Tom Stallard and Dr. Henrik Melin are both part of the School of Physics and Astronomy at the University of Leicester. Dr. Stallard added:

“There has been a very long-standing puzzle in the thin atmosphere at the top of every Giant Planet within our solar system. With every Jupiter space mission, along with ground-based observations, over the past 50 years, we have consistently measured the equatorial temperatures as being much too hot.

“This ‘energy crisis’ has been a long standing issue – do the models fail to properly model how heat flows from the aurora, or is there some other unknown heat source near the equator?

“This paper describes how we have mapped this region in unprecedented detail and have shown that, at Jupiter, the equatorial heating is directly associated with auroral heating.”

Aurorae occur when charged particles are caught in a planet’s magnetic field. These spiral along the field lines towards the planet’s magnetic poles, striking atoms and molecules in the atmosphere to release light and energy.

On Earth, this leads to the characteristic light show that forms the Aurora Borealis and Australis. At Jupiter, the material spewing from its volcanic moon, Io, leads to the most powerful aurora in the Solar System and enormous heating in the polar regions of the planet.

Although the Jovian aurorae have long been a prime candidate for heating the planet’s atmosphere, observations have previously been unable to confirm or deny this until now.

Previous maps of the upper atmospheric temperature were formed from images consisting of only a few pixels. That is not enough resolution to see how the temperature changes across the planet, providing few clues as to the origin of the extra heat.

Researchers created five maps of the atmospheric temperature at different spatial resolutions, with the highest resolution map showing an average temperature measurement for squares two degrees longitude ‘high’ by two degrees latitude ‘wide’.

The team scoured more than 10,000 individual data points, only mapping points with an uncertainty of less than five percent.
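
The kind of filtering-and-binning step described above can be sketched in a few lines. This is illustrative only, not the authors' pipeline; the function name and every data value are hypothetical, chosen merely to mirror the 2-degree grid cells and the 5% uncertainty cut.

```python
# Illustrative sketch only (not the authors' pipeline): bin individual
# temperature measurements into 2-degree x 2-degree longitude/latitude
# cells, keeping only points whose relative uncertainty is below 5%.
# The function and all data values are hypothetical.

def bin_temperatures(points, cell_deg=2.0, max_rel_err=0.05):
    """points: iterable of (lon_deg, lat_deg, temp_K, rel_err) tuples.
    Returns {(lon_cell, lat_cell): mean temperature} for accepted points."""
    sums, counts = {}, {}
    for lon, lat, temp, rel_err in points:
        if rel_err >= max_rel_err:  # discard overly uncertain measurements
            continue
        key = (int(lon // cell_deg), int(lat // cell_deg))
        sums[key] = sums.get(key, 0.0) + temp
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

data = [
    (10.4, 3.1, 950.0, 0.02),   # accepted, lands in cell (5, 1)
    (11.2, 2.7, 970.0, 0.04),   # accepted, same cell -> averaged
    (11.9, 3.9, 1200.0, 0.08),  # rejected: uncertainty >= 5%
]
print(bin_temperatures(data))  # {(5, 1): 960.0}
```

Each grid cell ends up with the average of only the well-constrained measurements that fall inside it, which is the essence of building a reliable low-noise temperature map from many individual data points.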

Models of the atmospheres of gas giants suggest that they work like a giant refrigerator, with heat energy drawn from the equator towards the pole, and deposited in the lower atmosphere in these pole regions.

These new findings suggest that fast-changing aurorae may drive waves of energy against this poleward flow, allowing heat to reach the equator.

Observations also showed a region of localized heating in the sub-auroral region: a limited wave of heat propagating toward the equator, which could be evidence of the process driving the heat transfer.

Planetary research at the University of Leicester spans the breadth of the Jovian system, from the planet’s magnetosphere and atmosphere out to its diverse collection of satellites.

Leicester researchers are members of the Juno mission, a global team of astronomers observing the giant planet, and are leading Jupiter observations from the forthcoming James Webb Space Telescope. Leicester also plays a leading role in science and instrumentation on the European Space Agency (ESA)’s Jupiter Icy Moons Explorer (JUICE), due for launch in 2022.

Reference: “Global upper-atmospheric heating on Jupiter by the polar aurorae” by J. O’Donoghue, L. Moore, T. Bhakyapaibul, H. Melin, T. Stallard, J. E. P. Connerney and C. Tao, 4 August 2021, Nature.
DOI: 10.1038/s41586-021-03706-w

Free New AI Tools Accelerate Functional Electronic Materials Discovery
New AI-based Tools to Accelerate Functional Electronic Materials Discovery

Using machine-learning tools, the scientists identified important features to characterize materials that exhibit a metal-insulator transition. Credit: Northwestern University and the Massachusetts Institute of Technology

The work could allow scientists to accelerate the discovery of materials showing a metal-insulator transition.

An interdisciplinary team of scientists from Northwestern Engineering and the Massachusetts Institute of Technology has used artificial intelligence (AI) techniques to build new, free, and easy-to-use tools that allow scientists to accelerate the rate of discovery and study of materials that exhibit a metal-insulator transition (MIT), as well as identify new features that can describe this class of materials.

One of the keys to making microelectronic devices faster and more energy efficient, as well as to designing new computer architectures, is the discovery of new materials with tunable electronic properties. Materials with an MIT can switch between metallic and insulating electronic behavior, depending on conditions in their environment.

Although some materials that exhibit MITs have already been implemented in electronic devices, fewer than 70 materials with this property are known, and even fewer exhibit the performance necessary for integration into new electronic devices. Further, these materials switch electrically through a variety of mechanisms, which makes a general understanding of this class of materials difficult to obtain.

“By providing a database, online classifier, and new set of features, our work opens new pathways to the understanding and discovery in this class of materials,” said James Rondinelli, Morris E. Fine Professor in Materials and Manufacturing at the McCormick School of Engineering and the study’s corresponding primary investigator. “Further, this work can be used by other scientists and applied to other material classes to accelerate the discovery and understanding of other classes of quantum materials.”

“One of the key elements of our tools and models is that they are accessible to a wide audience; scientists and engineers don’t need to understand machine learning to use them, just as one doesn’t need a deep understanding of search algorithms to navigate the internet,” said Alexandru Georgescu, postdoctoral researcher in the Rondinelli lab who is the study’s first co-author.

The team presented its research in the paper “Database, Features, and Machine Learning Model to Identify Thermally Driven Metal–Insulator Transition Compounds,” published on July 6, 2021, in the academic journal Chemistry of Materials.

Daniel Apley, professor of industrial engineering and management sciences at Northwestern Engineering, was a co-primary investigator. Elsa A. Olivetti, Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering at the Massachusetts Institute of Technology, was also a co-primary investigator.

Using their existing knowledge of MIT materials, combined with natural language processing (NLP), the researchers scoured the existing literature to identify the 60 known MIT compounds, as well as 300 materials that are similar in chemical composition but do not show an MIT. The team has released the resulting materials – along with the features identified as relevant – as a freely available database for public use.

Then, using machine-learning tools, the scientists identified important features that characterize these materials. They confirmed the importance of certain features already known to matter, such as the distances between transition-metal ions and the electrostatic repulsion between some of them, as well as the accuracy of the model. They also identified new, previously underappreciated features, such as how much the atoms differ from one another in size, and measures of how ionic or covalent the interatomic bonds are. These features proved critical to developing a reliable machine-learning model for MIT materials, which has been packaged into an openly accessible format.
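
To make the idea of a probabilistic materials classifier concrete, here is a minimal sketch. It is emphatically not the published model, which uses curated features and a trained machine-learning classifier; the two features, all the numbers, and the simple nearest-centroid rule below are invented purely for illustration.

```python
# Hypothetical sketch of a probabilistic MIT classifier -- NOT the
# published model. The two features stand in for, e.g., a metal-metal
# distance (angstroms) and an ionicity measure; all values are invented.
import math

# (feature vector, label) pairs; label 1 = shows an MIT, 0 = does not.
train = [
    ((2.8, 0.60), 1), ((2.9, 0.55), 1), ((3.0, 0.65), 1),
    ((3.6, 0.20), 0), ((3.7, 0.25), 0), ((3.5, 0.15), 0),
]

def centroid(rows):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

mit_centroid = centroid([x for x, y in train if y == 1])
non_centroid = centroid([x for x, y in train if y == 0])

def mit_probability(features):
    """Soft nearest-centroid score: closer to the MIT centroid -> higher."""
    d_mit = math.dist(features, mit_centroid)
    d_non = math.dist(features, non_centroid)
    return math.exp(-d_mit) / (math.exp(-d_mit) + math.exp(-d_non))

p = mit_probability((2.95, 0.58))   # a compound near the MIT examples
print(f"P(MIT) = {p:.2f}")          # > 0.5, i.e. likely an MIT compound
```

The real model returns a calibrated probability from many more features, but the interface is the same idea: feed in a compound's descriptors, get back an estimate of whether it is a metal, an insulator, or an MIT compound.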

“This free tool allows anyone to quickly obtain probabilistic estimates on whether the material they are studying is a metal, insulator, or a metal-insulator transition compound,” Apley said.

Reference: “Database, Features, and Machine Learning Model to Identify Thermally Driven Metal–Insulator Transition Compounds” by Alexandru B. Georgescu, Peiwen Ren, Aubrey R. Toland, Shengtong Zhang, Kyle D. Miller, Daniel W. Apley, Elsa A. Olivetti, Nicholas Wagner and James M. Rondinelli, 6 July 2021, Chemistry of Materials.
DOI: 10.1021/acs.chemmater.1c00905

Work on this study was born from projects within the Predictive Science and Engineering Design (PS&ED) interdisciplinary cluster program sponsored by The Graduate School at Northwestern. The study was also supported by funding from the Designing Materials to Revolutionize and Engineer our Future (DMREF) program of the National Science Foundation and the Advanced Research Projects Agency – Energy’s (ARPA-E) DIFFERENTIATE program, which seeks to use emerging AI technologies to tackle major energy and environmental challenges.

Scientists have determined why Mercury has such a large core

In a new study, scientists present a fresh explanation for Mercury’s large core – one not related to collisions during the formation of the solar system.

The new study challenges the long-standing hypothesis for why Mercury has such a large core relative to its mantle (the layer between a planet’s core and crust). For decades, scientists argued that collisions with other bodies during the formation of our solar system stripped away much of Mercury’s rocky mantle, leaving a large, dense, metallic core inside. The new research shows that collisions are not to blame – the Sun’s magnetism is.

William McDonough, professor of geology at the University of Maryland, and Takashi Yoshizaki of Tohoku University developed a model showing that the density, mass, and iron content of a rocky planet’s core depends on its distance from the sun’s magnetic field. An article describing the discovery appeared in the journal Progress in Earth and Planetary Science.

“The four planets in our solar system — Mercury, Venus, Earth and Mars — are made up of varying proportions of metal and rock,” McDonough said. “There is a tendency for the metal content in the core to decrease as the planets get farther from the Sun. Our paper explains how this happened by showing that the distribution of raw materials in the early forming solar system was controlled by the sun’s magnetic field.”

McDonough’s new model shows that during the early formation of the solar system, when the young Sun was surrounded by a swirling cloud of dust and gas, iron grains were pulled toward the center by the Sun’s magnetic field. When planets closer to the Sun began to form from clumps of this dust and gas, they included more iron in their cores than those that were farther away.

The researchers found that the density and proportion of iron in the core of a rocky planet correlates with the strength of the magnetic field around the sun during planet formation. In a new study, they suggest that magnetism should be factored into future attempts to describe the composition of rocky planets, including those outside our solar system.
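
The claimed trend can be illustrated with a toy calculation. This is not the authors' model: it simply assumes a dipole-like falloff of the young Sun's field, B ∝ 1/r³, normalized arbitrarily to Mercury's orbit, to show how steeply the inward pull on iron grains would weaken with distance.

```python
# Toy illustration (not the authors' published model): assume the young
# Sun's magnetic field falls off like a dipole, B ~ 1/r^3, so the inward
# pull on iron grains weakens steeply with distance. The normalization
# to Mercury's orbit is arbitrary and purely for comparison.

def relative_field(r_au, r_ref=0.39):
    """Dipole-like falloff with distance from the Sun (r in AU),
    normalized to 1.0 at Mercury's orbit (r_ref)."""
    return (r_ref / r_au) ** 3

planets = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.0, "Mars": 1.52}
for name, r_au in planets.items():
    print(f"{name:8s} r = {r_au:.2f} AU   relative field ~ {relative_field(r_au):.3f}")
```

The steep falloff qualitatively matches the observed trend of decreasing core iron content with distance from the Sun, which is the correlation the study reports.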

The composition of a planet’s core is important to its potential to support life. On Earth, for example, a molten iron core creates a magnetosphere that protects the planet from cancer-causing cosmic rays. The core also contains most of the phosphorus, an essential nutrient for maintaining carbon-based life.

Using existing models of planetary formation, McDonough determined the rate at which gas and dust were being pulled into the center of our solar system during its formation. He took into account the magnetic field that the sun should have generated when it appeared, and calculated how this magnetic field would pull iron through a cloud of dust and gas.

New Insights Into How Central Supermassive Black Holes Influence the Evolution of Their Host Galaxy

Galaxy Universe Concept

Emirati national Aisha Al Yazeedi, a research scientist at the NYU Abu Dhabi (NYUAD) Center for Astro, Particle, and Planetary Physics, has published her first research paper, featuring some key findings on the evolution of galaxies.

Galaxies eventually undergo a phase in which they lose most of their gas, which results in a change in their properties over the course of their evolution. Current models of galaxy evolution suggest this should eventually happen to all galaxies, including our own Milky Way; Al Yazeedi and her team are delving into this process.

Blob Source Extracted From DESI

Composite RGB image of the Blob Source extracted from the DESI Legacy Imaging Surveys (Dey et al. (2019), legacysurvey.org). The MaNGA field of view is shown in orange. The gray box corresponds to the GMOS field of view. Credit: Dey et al. (2019), legacysurvey.org

Commenting on the findings, Al Yazeedi said: “The evolution of galaxies is directly linked to the activity of their central supermassive black hole (SMBH). However, the connection between the activity of SMBHs and the ejection of gas from the entire galaxy is poorly understood. Observational studies, including our research, are essential to clarify how the central SMBH can influence the evolution of its entire host galaxy and prove key theoretical concepts in the field of astrophysics.”

Titled “The impact of low luminosity AGN on their host galaxies: A radio and optical investigation of the kpc-scale outflow in MaNGA 1-166919,” the paper has been published in The Astrophysical Journal. Its findings outline gas ejection mechanisms, outflow properties, and how they relate to the activity of the supermassive black hole (SMBH) at the center of the host galaxy.

To that end, the paper presents a detailed optical and radio study of the MaNGA 1-166919 galaxy, which appears to have an Active Galactic Nucleus (AGN). Radio morphology shows two lobes (jets) emanating from the center of the galaxy, a clear sign of AGN activity that could be driving the optical outflow. By measuring the outflow properties, the NYUAD researchers documented how the extent of the optical outflow matches the extent of radio emission.

MzLS Image Isophotes

Superposition of optical z-band MzLS image isophotes (gray) and our highest-spatial-resolution radio image in S band (blue). The optical image has a spatial resolution of 0.84″, while the S-band radio data reach 0.9″. Credit: NYU Abu Dhabi

Al Yazeedi is a member of NYUAD’s Kawader program, a national capacity-building research fellowship that allows outstanding graduates to gain experience in cutting-edge academic research. The three-year, individually tailored, intensive program is designed for graduates considering a graduate degree or a career in research.

Her paper adds to the growing body of UAE space research and activities. The UAE has sent an Emirati into space, a spacecraft around Mars, and recently announced plans to send a robotic rover to the Moon in 2022, ahead of the ultimate goal to build a city on Mars by 2117.

GMOS Outflow Map

The above figure is a GMOS outflow map with radio contours overlaid in black. The outflow velocities show a clear spatial separation of “red” and “blue” components. It strongly suggests a biconical outflow and nicely shows the correspondence between the optical outflow and radio emission. Credit: NYU Abu Dhabi

Emirati women are playing a key role in the research and development behind these projects. The Mars Hope probe science team, which is 80 percent female, was led by Sarah Al Amiri, Minister of State for Advanced Sciences and chairperson of the country’s space agency.

Reference: “The impact of low luminosity AGN on their host galaxies: A radio and optical investigation of the kpc-scale outflow in MaNGA 1-166919,” 3 August 2021, The Astrophysical Journal.
DOI: 10.3847/1538-4357/abf5e1

New Algorithm Flies Drones Faster Than World-Class Human Racing Pilots
Drone Flying Through Smoke

A drone flying through smoke to visualize the complex aerodynamic effects. Credit: Robotics and Perception Group, University of Zurich

To be useful, drones need to be quick. Because of their limited battery life, they must complete whatever task they have — searching for survivors on a disaster site, inspecting a building, delivering cargo — in the shortest possible time. And they may have to do it by going through a series of waypoints like windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration at each segment.

Algorithm outperforms professional pilots

The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor — a drone with four propellers — through a series of waypoints on a circuit. “Our drone beat the fastest lap of two world-class human pilots on an experimental race track,” says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of the NCCR Robotics, which funded the research.

“The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations,” says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. “The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that,” adds Philipp Foehn, PhD student and first author of the paper.
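
To give a flavor of what "time-optimal" means in the simplest possible setting, here is a sketch of bang-bang motion along a single axis between two waypoints, starting and ending at rest. This is a gross simplification for illustration only; the published algorithm optimizes the full quadrotor dynamics, and all the numbers below are hypothetical.

```python
# Greatly simplified sketch of time-optimal ("bang-bang") motion in 1D --
# NOT the published algorithm, which handles full quadrotor dynamics.
import math

def min_segment_time(distance, a_max, v_max):
    """Time-optimal rest-to-rest motion: accelerate at a_max, optionally
    cruise at v_max, then brake at a_max."""
    d = abs(distance)
    v_peak = math.sqrt(a_max * d)   # speed if we flip acceleration at midpoint
    if v_peak <= v_max:             # triangular profile (never hit v_max)
        return 2.0 * math.sqrt(d / a_max)
    t_ramp = v_max / a_max          # time for each ramp
    d_ramp = v_max ** 2 / a_max     # distance covered by both ramps together
    return 2.0 * t_ramp + (d - d_ramp) / v_max   # trapezoidal profile

# Hypothetical numbers: a 10 m segment, 20 m/s^2 accel limit, 15 m/s cap.
print(f"{min_segment_time(10.0, 20.0, 15.0):.3f} s")  # 1.414 s
```

Even this toy version shows the key trade-off: saturating the vehicle's limits (full acceleration, then full braking) is what makes a trajectory time-optimal, and the real algorithm does the analogous thing in 3D while choosing for itself how and when to pass each waypoint.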

External cameras provide position information in real-time

The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and — in the case of the autonomous drone — to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.

Before commercial applications, the algorithm will need to become less computationally demanding, as it currently takes up to an hour for a computer to calculate the time-optimal trajectory for the drone. Also, the drone currently relies on external cameras to compute its position at any moment. In future work, the scientists want to use onboard cameras. But the demonstration that an autonomous drone can in principle fly faster than human pilots is promising. “This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more,” says Scaramuzza.

Reference: 21 July 2021, Science Robotics.
DOI: 10.1126/scirobotics.abh1221

Planetary Shields Will Buckle Under Furious Stellar Winds From Their Dying Stars – Nearly Impossible for Life To Survive
Material Ejected From Sun Earth's Magnetosphere

When the Sun evolves to become a red giant star, the Earth may be swallowed by our star’s atmosphere, and with a much more unstable solar wind, even the resilient and protective magnetospheres of the giant outer planets may be stripped away. Credit: MSFC / NASA

Any life identified on planets orbiting white dwarf stars almost certainly evolved after the star’s death, says a new study led by the University of Warwick that reveals the consequences of the intense and furious stellar winds that will batter a planet as its star is dying. The research is published in Monthly Notices of the Royal Astronomical Society, and lead author Dr. Dimitri Veras presented it today (July 21, 2021) at the online National Astronomy Meeting (NAM 2021).

The research provides new insight for astronomers searching for signs of life around these dead stars by examining the impact that their winds will have on orbiting planets during the star’s transition to the white dwarf stage. The study concludes that it is nearly impossible for life to survive cataclysmic stellar evolution unless the planet has an intensely strong magnetic field — or magnetosphere — that can shield it from the worst effects.

In the case of Earth, solar wind particles can erode the protective layers of the atmosphere that shield humans from harmful ultraviolet radiation. The terrestrial magnetosphere acts like a shield to divert those particles away through its magnetic field. Not all planets have a magnetosphere, but Earth’s is generated by its iron core, which rotates like a dynamo to create its magnetic field.

“We know that the solar wind in the past eroded the Martian atmosphere, which, unlike Earth, does not have a large-scale magnetosphere. What we were not expecting to find is that the solar wind in the future could be as damaging even to those planets that are protected by a magnetic field”, says Dr Aline Vidotto of Trinity College Dublin, the co-author of the study.

All stars eventually run out of available hydrogen that fuels the nuclear fusion in their cores. In the Sun the core will then contract and heat up, driving an enormous expansion of the outer atmosphere of the star into a ‘red giant’. The Sun will then stretch to a diameter of tens of millions of kilometers, swallowing the inner planets, possibly including the Earth. At the same time the loss of mass in the star means it has a weaker gravitational pull, so the remaining planets move further away.

During the red giant phase, the solar wind will be far stronger than today, and it will fluctuate dramatically. Veras and Vidotto modeled the winds from 11 different types of stars, with masses ranging from one to seven times the mass of our Sun.

Their model demonstrated how the density and speed of the stellar wind, combined with an expanding planetary orbit, conspire to alternately shrink and expand the magnetosphere of a planet over time. For any planet to maintain its magnetosphere throughout all stages of stellar evolution, its magnetic field needs to be at least one hundred times stronger than Jupiter’s current magnetic field.
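
For intuition about why the wind's strength matters, here is a back-of-the-envelope pressure-balance sketch. The relation used (magnetic pressure of a dipole field balancing the wind's ram pressure) is a standard textbook estimate, and all input numbers are illustrative, not taken from the paper.

```python
# Back-of-the-envelope sketch (illustrative values, not the authors'
# calculation): the magnetopause sits roughly where the planet's magnetic
# pressure B(r)^2 / (2 mu0), with a dipole field B(r) = B_surf (R_p/r)^3,
# balances the stellar wind's ram pressure rho * v^2.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def standoff_radii(b_surface, rho_wind, v_wind):
    """Magnetopause distance in planetary radii from pressure balance:
    B_surf^2 (R/r)^6 / (2 mu0) = rho v^2  ->  r/R = (B^2/(2 mu0 rho v^2))^(1/6)."""
    ram_pressure = rho_wind * v_wind ** 2
    return (b_surface ** 2 / (2.0 * MU0 * ram_pressure)) ** (1.0 / 6.0)

# Roughly Earth-like inputs: 3e-5 T surface field, present-day solar wind
# of ~8 protons/cm^3 (~1.34e-20 kg/m^3) moving at ~400 km/s.
r = standoff_radii(3e-5, 1.34e-20, 4.0e5)
print(f"standoff ~ {r:.1f} planetary radii")  # ~7 planetary radii
```

A denser, faster red-giant wind raises rho * v², pushing the magnetopause inward; because the standoff distance depends only on the one-sixth power of the field strength, resisting a dramatically stronger wind requires an enormously stronger field, consistent with the hundred-times-Jupiter figure above.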

The process of stellar evolution also shifts a star’s habitable zone, the range of distances at which a planet would have the right temperature to support liquid water. In our solar system, the habitable zone would move from about 150 million km from the Sun — where Earth is currently positioned — out to 6 billion km, beyond Neptune. Although an orbiting planet would also change position during the giant branch phases, the scientists found that the habitable zone moves outward more quickly than the planet, posing additional challenges to any existing life hoping to survive the process.
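
The scale of that shift can be sanity-checked with the usual flux scaling (a rough estimate, not a number from the paper): the habitable-zone distance grows as the square root of the star's luminosity.

```python
# Rough scaling check, not a figure from the study: since stellar flux
# falls as 1/d^2, the habitable-zone distance scales as sqrt(L), so the
# quoted shift implies a large rise in the star's luminosity.
d_now = 150e6    # km, habitable zone today (~Earth's orbit)
d_giant = 6e9    # km, habitable zone during the red giant phase
lum_factor = (d_giant / d_now) ** 2   # because d_HZ ~ sqrt(L)
print(f"implied luminosity increase: ~{lum_factor:.0f}x")  # ~1600x
```

A red giant more than a thousand times as luminous as today's Sun is consistent with the enormous expansion described above, which is why the zone races outward faster than any planet's orbit can migrate.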

Eventually, the red giant sheds its entire outer atmosphere, leaving behind the dense hot white dwarf remnant. These do not emit stellar winds, so once the star reaches this stage the danger to surviving planets has passed.

Dr. Veras said: “This study demonstrates the difficulty of a planet maintaining its protective magnetosphere throughout the entirety of the giant branch phases of stellar evolution.”

“One conclusion is that life on a planet in the habitable zone around a white dwarf would almost certainly develop during the white dwarf phase unless that life was able to withstand multiple extreme and sudden changes in its environment.”

Future missions like the James Webb Space Telescope due to be launched later this year should reveal more about planets that orbit white dwarf stars, including whether planets within their habitable zones show biomarkers that indicate the presence of life, so the study provides valuable context to any potential discoveries.

So far no terrestrial planet that could support life around a white dwarf has been found, but two known gas giants are close enough to their star’s habitable zone to suggest that such a planet could exist. These planets likely moved in closer to the white dwarf as a result of interactions with other planets further out.

Dr. Veras adds: “These examples show that giant planets can approach very close to the habitable zone. The habitable zone for a white dwarf is very close to the star because they emit much less light than a Sun-like star. However, white dwarfs are also very steady stars as they have no winds. A planet that’s parked in the white dwarf habitable zone could remain there for billions of years, allowing time for life to develop provided that the conditions are suitable.”

Meeting: Royal Astronomical Society National Astronomy Meeting

A Bug’s Life: Mountains on Neutron Stars May Be Only Fractions of Millimeters Tall
Neutron Star Artist’s Depiction

Artist’s depiction of a neutron star. Credit: ESO / L. Calçada

New models of neutron stars show that their tallest mountains may be only fractions of millimeters high, due to the huge gravity on the ultra-dense objects. The research is presented today at the National Astronomy Meeting 2021.

Neutron stars are some of the densest objects in the Universe: they weigh about as much as the Sun, yet measure only around 10km across, similar in size to a large city.

Because of their compactness, neutron stars have an enormous gravitational pull, around a billion times stronger than Earth’s. This squashes every feature on the surface to minuscule dimensions and means that the stellar remnant is an almost perfect sphere.

These deformations from a perfect sphere are known as mountains, although they are billions of times smaller than mountains on Earth. The team behind the work, led by PhD student Fabian Gittins at the University of Southampton, used computational modeling to build realistic neutron stars and subjected them to a range of mathematical forces to identify how the mountains are created.

The team also studied the role of the ultra-dense nuclear matter in supporting the mountains, and found that the largest mountains produced were only a fraction of a millimeter tall, one hundred times smaller than previous estimates.

Fabian comments, “For the past two decades, there has been much interest in understanding how large these mountains can be before the crust of the neutron star breaks, and the mountain can no longer be supported.”

Past work has suggested that neutron stars can sustain deviations from a perfect sphere of up to a few parts in one million, implying the mountains could be as large as a few centimeters. These calculations assumed the neutron star was strained in such a way that the crust was close to breaking at every point. However, the new models indicate that such conditions are not physically realistic.
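The figures in this paragraph can be checked with a back-of-envelope estimate: the mountain height is roughly the fractional deviation from sphericity (the "ellipticity") times the ~10 km stellar radius. This is a crude illustration of the scaling, not the team's actual computation:

```python
def mountain_height_mm(ellipticity, radius_km=10.0):
    """Very rough estimate: surface deformation ~ ellipticity * radius.
    Converts the result from kilometers to millimeters."""
    return ellipticity * radius_km * 1e6  # km -> mm

# A few parts in a million gives centimeter-scale "mountains"...
old_estimate = mountain_height_mm(1e-6)   # ~10 mm, i.e. ~1 cm
# ...while the new estimate, roughly 100x smaller, is a fraction of a millimeter.
new_estimate = mountain_height_mm(1e-8)   # ~0.1 mm
```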

Fabian adds: “These results show how neutron stars truly are remarkably spherical objects. Additionally, they suggest that observing gravitational waves from rotating neutron stars may be even more challenging than previously thought.”

Due to their intense gravitation, spinning neutron stars with slight deformations should produce ripples in the fabric of spacetime known as gravitational waves, even though they are single objects. Gravitational waves from rotating single neutron stars have yet to be observed, although future advances in extremely sensitive detectors such as Advanced LIGO and Virgo may hold the key to probing these unique objects.

Amazing New 3D Images of Shark Intestines Show They Function Like Nikola Tesla’s Valve
Pacific Spiny Dogfish Spiral Intestine

A CT scan image of the spiral intestine of a Pacific spiny dogfish shark (Squalus suckleyi). The beginning of the intestine is on the left, and the end is on the right. Credit: Samantha Leigh/California State University Dominguez Hills

Contrary to what popular media portrays, we actually don’t know much about what sharks eat. Even less is known about how they digest their food, and the role they play in the larger ocean ecosystem.

For more than a century, researchers have relied on flat sketches of sharks’ digestive systems to discern how they function — and how what they eat and excrete impacts other species in the ocean. Now, researchers have produced a series of high-resolution, 3D scans of intestines from nearly three dozen shark species that will advance the understanding of how sharks eat and digest their food.

Smooth Dogfish

Three smooth dogfish sharks (Mustelus canis). Credit: Elizabeth Roberts/Wikimedia Commons

“It’s high time that some modern technology was used to look at these really amazing spiral intestines of sharks,” said lead author Samantha Leigh, assistant professor at California State University Dominguez Hills. “We developed a new method to digitally scan these tissues and now can look at the soft tissues in such great detail without having to slice into them.”

The research team from California State University Dominguez Hills, the University of Washington, and University of California, Irvine, published its findings July 21 in the journal Proceedings of the Royal Society B.

CT Scan Dogfish Shark Spiral Intestine

A CT scan image of a dogfish shark spiral intestine, shown from the top looking down. Credit: Samantha Leigh/California State University Dominguez Hills

The researchers primarily used a computerized tomography (CT) scanner at the UW’s Friday Harbor Laboratories to create 3D images of shark intestines, which came from specimens preserved at the Natural History Museum of Los Angeles. The machine works like a standard CT scanner used in hospitals: A series of X-ray images is taken from different angles, then combined using computer processing to create three-dimensional images. This allows researchers to see the complexities of a shark intestine without having to dissect or disturb it.
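The scanning principle described above, combining 1D X-ray shadows taken from many angles into a full image, can be sketched in a few lines. This toy example uses just two viewing angles and unfiltered backprojection; real scanners use hundreds of angles and filtered reconstruction, so this only illustrates the idea:

```python
import numpy as np

# Toy "CT" demonstration: a small dense object in an 8x8 image.
phantom = np.zeros((8, 8))
phantom[3:5, 3:5] = 1.0

# Each projection is the 1D shadow the object casts from one direction.
proj_0 = phantom.sum(axis=0)    # viewed along the columns
proj_90 = phantom.sum(axis=1)   # viewed along the rows

# Unfiltered backprojection: smear each shadow back across the image
# and add the contributions together.
recon = np.tile(proj_0, (8, 1)) + np.tile(proj_90, (8, 1)).T

# The brightest region of the reconstruction coincides with the object.
peak = np.unravel_index(recon.argmax(), recon.shape)
```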

Pacific Spiny Dogfish

A live Pacific spiny dogfish shark (Squalus suckleyi). Credit: Samantha Leigh/California State University Dominguez Hills

“CT scanning is one of the only ways to understand the shape of shark intestines in three dimensions,” said co-author Adam Summers, a professor based at UW Friday Harbor Labs who has led a worldwide effort to scan the skeletons of fishes and other vertebrate animals. “Intestines are so complex — with so many overlapping layers, that dissection destroys the context and connectivity of the tissue. It would be like trying to understand what was reported in a newspaper by taking scissors to a rolled-up copy. The story just won’t hang together.”

Smooth Dogfish Spiral Intestine

A CT scan image of a smooth dogfish shark (Mustelus canis) spiral intestine, shown from the top looking down. Credit: Samantha Leigh/California State University Dominguez Hills

From their scans, the researchers discovered several new aspects of how shark intestines function. It appears these spiral-shaped organs slow the movement of food and direct it downward through the gut, relying on gravity in addition to peristalsis, the rhythmic contraction of the gut’s smooth muscle. Their function resembles that of the one-way valve designed by Nikola Tesla more than a century ago, which allows fluid to flow in one direction without backflow or assistance from any moving parts.

This finding could shed new light on how sharks eat and process their food. Most sharks usually go days or even weeks between eating large meals, so they rely on being able to hold food in their system and absorb as many nutrients as possible, Leigh explained. The slowed movement of food through their gut caused by the spiral intestine probably allows sharks to retain their food longer, and they also use less energy processing that food.

This video shows the 3D image of a Pacific spiny dogfish (Squalus suckleyi) spiral intestine. Credit: Samantha Leigh/California State University Dominguez Hills

Because sharks are top predators in the ocean and also eat a lot of different things — invertebrates, fish, mammals and even seagrass — they naturally control the biodiversity of many species, the researchers said. Knowing how sharks process what they eat, and how they excrete waste, is important for understanding the larger ecosystem.

“The vast majority of shark species, and the majority of their physiology, are completely unknown. Every single natural history observation, internal visualization and anatomical investigation shows us things we could not have guessed at,” Summers said. “We need to look harder at sharks and, in particular, we need to look harder at parts other than the jaws, and the species that don’t interact with people.”

This video shows the soft tissue of a Pacific spiny dogfish (Squalus suckleyi) spiral intestine, rotated and viewed from different angles. Credit: Samantha Leigh/California State University Dominguez Hills

The authors plan to use a 3D printer to create models of several different shark intestines to test how materials move through the structures in real time. They also hope to collaborate with engineers to use shark intestines as inspiration for industrial applications such as wastewater treatment or filtering microplastics out of the water column.

Reference: 20 July 2021, Proceedings of the Royal Society B.
DOI: 10.1098/rspb.2021.1359

Other co-authors on the paper are Donovan German of University of California, Irvine, and Sarah Hoffmann of Applied Biological Services.

This research was funded by Friday Harbor Laboratories, the UC Irvine OCEANS Graduate Research Fellowship, the Newkirk Center Graduate Research Fellowship, the National Science Foundation Graduate Research Fellowship Program and UC Irvine.

Snapshots of Ultrafast Switching in Quantum Electronics Could Lead to Faster Computing Devices
Capturing Ultrafast Atomic Motions Inside Tiny Switches

A team of researchers created a new method to capture ultrafast atomic motions inside the tiny switches that control the flow of current in electronic circuits. Pictured here are Aditya Sood (left) and Aaron Lindenberg (right). Credit: Greg Stewart/SLAC National Accelerator Laboratory

Scientists Take First Snapshots of Ultrafast Switching in a Quantum Electronic Device

They discover a short-lived state that could lead to faster and more energy-efficient computing devices.

Electronic circuits that compute and store information contain millions of tiny switches that control the flow of electric current. A deeper understanding of how these tiny switches work could help researchers push the frontiers of modern computing.

Now scientists have made the first snapshots of atoms moving inside one of those switches as it turns on and off. Among other things, they discovered a short-lived state within the switch that might someday be exploited for faster and more energy-efficient computing devices.

The research team from the Department of Energy’s SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Penn State University and Purdue University described their work in a paper published in Science today (July 15, 2021).

“This research is a breakthrough in ultrafast technology and science,” says SLAC scientist and collaborator Xijie Wang. “It marks the first time that researchers used ultrafast electron diffraction, which can detect tiny atomic movements in a material by scattering a powerful beam of electrons off a sample, to observe an electronic device as it operates.”

Ultrafast Switching Quantum Electronic Device

The team used electrical pulses, shown here in blue, to turn their custom-made switches on and off several times. They timed these electrical pulses to arrive just before the electron pulses produced by SLAC’s ultrafast electron diffraction source MeV-UED, which captured the atomic motions happening inside these switches as they turned on and off. Credit: Greg Stewart/SLAC National Accelerator Laboratory

Capturing the cycle

For this experiment, the team custom-designed miniature electronic switches made of vanadium dioxide, a prototypical quantum material whose ability to change back and forth between insulating and electrically conducting states near room temperature could be harnessed as a switch for future computing. The material also has applications in brain-inspired computing because of its ability to create electronic pulses that mimic the neural impulses fired in the human brain.

The researchers used electrical pulses to toggle these switches back and forth between the insulating and conducting states while taking snapshots that showed subtle changes in the arrangement of their atoms over billionths of a second. Those snapshots, taken with SLAC’s ultrafast electron diffraction camera, MeV-UED, were strung together to create a molecular movie of the atomic motions.

Lead researcher Aditya Sood discusses new research which could lead to a better understanding of how the tiny switches inside electronic circuits work. Credit: Olivier Bonin/SLAC National Accelerator Laboratory

“This ultrafast camera can actually look inside a material and take snapshots of how its atoms move in response to a sharp pulse of electrical excitation,” said collaborator Aaron Lindenberg, an investigator with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC and a professor in the Department of Materials Science and Engineering at Stanford University. “At the same time, it also measures how the electronic properties of that material change over time.”

With this camera, the team discovered a new, intermediate state within the material. It is created when the material responds to an electric pulse by switching from the insulating to the conducting state.

“The insulating and conducting states have slightly different atomic arrangements, and it usually takes energy to go from one to the other,” said SLAC scientist and collaborator Xiaozhe Shen. “But when the transition takes place through this intermediate state, the switch can take place without any changes to the atomic arrangement.”

Opening a window on atomic motion

Although the intermediate state exists for only a few millionths of a second, it is stabilized by defects in the material.

To follow up on this research, the team is investigating how to engineer these defects in materials to make this new state more stable and longer lasting. This will allow them to make devices in which electronic switching can occur without any atomic motion, which would operate faster and require less energy.

“The results demonstrate the robustness of the electrical switching over millions of cycles and identify possible limits to the switching speeds of such devices,” said collaborator Shriram Ramanathan, a professor at Purdue. “The research provides invaluable data on microscopic phenomena that occur during device operations, which is crucial for designing circuit models in the future.”

The research also offers a new way of synthesizing materials that do not exist under natural conditions, allowing scientists to observe them on ultrafast timescales and then potentially tune their properties.

“This method gives us a new way of watching devices as they function, opening a window to look at how the atoms move,” said lead author and SIMES researcher Aditya Sood. “It is exciting to bring together ideas from the traditionally distinct fields of electrical engineering and ultrafast science. Our approach will enable the creation of next-generation electronic devices that can meet the world’s growing needs for data-intensive, intelligent computing.”

MeV-UED is an instrument of the LCLS user facility, operated by SLAC on behalf of the DOE Office of Science, which funded this research.

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.

SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.

Harnessing the Dark Side: Optical Singularities Could Be Used for a Wide Range of Applications
Heart-Shaped Phase Singularity Sheet

Cross-section of the designed heart-shaped phase singularity sheet. The extended dark region in the center image is a cross-section of the singularity sheet. The phase is undefined on the singularity sheet. Credit: Daniel Lim/Harvard SEAS

When we think about singularities, we tend to think of massive black holes in faraway galaxies or a distant future with runaway AI, but singularities are all around us. Singularities are simply a place where certain parameters are undefined. The North and South Pole, for example, are what’s known as coordinate singularities because they don’t have a defined longitude.

Optical singularities typically occur when the phase of light with a specific wavelength, or color, is undefined. These regions appear completely dark. Today, some optical singularities, including optical vortices, are being explored for use in optical communications and particle manipulation but scientists are just beginning to understand the potential of these systems. The question remains — can we harness darkness like we harnessed light to build powerful, new technologies?

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new way to control and shape optical singularities. The technique can be used to engineer singularities of many shapes, far beyond simple curved or straight lines. To demonstrate their technique, the researchers created a singularity sheet in the shape of a heart.

Polarization Properties

The singularity engineering procedure was also applied to creating more exotic singularities, such as a polarization singularity sheet. Here, the polarization properties (e.g. polarization azimuth, ellipticity angle, and intensity) of the experimental structured light field is compared to the numerical predictions. Credit: Daniel Lim/Harvard SEAS

“Conventional holography techniques are good at shaping light, but struggle to shape the darkness,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper. “We have demonstrated on-demand singularity engineering, which opens up a vast set of possibilities in wide-ranging fields, from super-resolution microscopy techniques to new atomic and particle traps.”

The research is published in Nature Communications.

Capasso and his team used flat metasurfaces with precisely-shaped nanopillars to shape the singularities.

“The metasurface tilts the wavefront of light in a very precise manner over a surface so that the interference pattern of the transmitted light produces extended regions of darkness,” said Daniel Lim, a graduate student at SEAS and first author of the paper. “This approach allows us to precisely engineer dark regions with remarkably high contrast.”

Metasurfaces Nanopillars Nanofins

Metasurfaces, which are nanostructured surfaces containing shapes like nanopillars (left) and nanofins (right), were employed to realize these singularity structures experimentally. The above image shows scanning electron microscope images of titanium dioxide nanostructures that were used to precisely shape the wavefront of light in producing the singularity sheets. Credit: Daniel Lim/Harvard SEAS

Engineered singularities could be used to trap atoms in dark regions. These singularities could also improve super high-resolution imaging. While light can only be focused to regions about half a wavelength (the diffraction limit) in size, darkness has no diffraction limit, meaning it can be localized to any size. This allows darkness to interact with particles over length scales much smaller than the wavelengths of light. This could be used to provide information on not only the size and the shape of the particles but their orientation as well.
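The half-wavelength figure quoted here is the standard Abbe diffraction limit, d = λ / (2·NA), where NA is the numerical aperture of the optics; the article's "half a wavelength" corresponds to NA ≈ 1. A quick illustration of that relation (not taken from the paper itself):

```python
def diffraction_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Abbe diffraction limit: the smallest spot size to which
    conventional optics can focus light of a given wavelength."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light at 500 nm focuses no tighter than ~250 nm,
# whereas an engineered dark region has no such lower bound on its size.
spot = diffraction_limit_nm(500.0)
```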

Engineered singularities could extend beyond waves of light to other types of waves.

“You can also engineer dead zones in radio waves or silent zones in acoustic waves,” said Lim. “This research points to the possibility of designing complex topologies in wave physics beyond optics, from electron beams to acoustics.”

Reference: “Engineering phase and polarization singularity sheets” by Soon Wei Daniel Lim, Joon-Suh Park, Maryna L. Meretska, Ahmed H. Dorrah and Federico Capasso, 7 July 2021, Nature Communications.
DOI: 10.1038/s41467-021-24493-y

The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.

The research was co-authored by Joon-Suh Park, Maryna L. Meretska, and Ahmed H. Dorrah. It was supported in part by the Air Force Office of Scientific Research under award number FA9550- 19-1-0135 and by the Office of Naval Research (ONR) under award number N00014-20-1-2450.

Increased Cell Phone Data Use Is Negatively Affecting Wi-Fi Performance
WiFi Signal Urban City

Wi-Fi utilizes a spectrum that is “unlicensed,” which can be utilized by any device or network that follows FCC transmission rules. In recent years, cellular providers have also begun using this spectrum.

University of Chicago researchers find competition between networks decreases performance.

If service becomes slow when you’re trying to send a quick email on your smartphone, you might scroll through your network options and discover how many Wi-Fi networks there are. In fact, this plethora of options is itself the problem. These networks are in competition with one another, limiting the speed at which each can operate.

University of Chicago researchers have demonstrated how this increased network competition could negatively impact internet service for everyday users.

Competition between networks arises when they operate on shared spectrum bands, or frequency ranges for electromagnetic waves. In particular, Wi-Fi utilizes a spectrum that is “unlicensed,” meaning any device or network can utilize this spectrum as long as certain transmission rules mandated by the Federal Communications Commission (FCC) are followed.

“The unlicensed spectrum is literally a free-for-all; anybody can use it, within the bounds set by FCC,” explained Monisha Ghosh, associate member in the Department of Computer Science at the University of Chicago and research professor in the Pritzker School of Molecular Engineering.

Cellular phone service mostly relies on entirely separate spectrum bands, which providers license from the FCC through spectrum auctions, though that has shifted with the growing demand for cellular data and limited bandwidth.

When a cellular provider, such as T-Mobile or AT&T, licenses a spectrum band from the FCC, it reserves exclusive use of that band. As a result, networks operating on licensed bands experience little interference. This allows providers to establish fast and reliable service, but it comes at a cost.

LAA Station

LAA stations like this one on the UChicago campus allow cellular providers to access the same shared spectrum bands used for Wi-Fi. Credit: Photo courtesy of Monisha Ghosh

“In the last five years, the cellular world has grown in its number of users and the amount of data they need,” Ghosh said. “Cellular service providers have begun running short on spectrum, and the licensed spectrum costs billions.”

To improve bandwidth without breaking the bank, these providers have begun to also use the unlicensed spectrum via cellular networks using a mode called licensed assisted access (LAA), which operates on the same bands used for Wi-Fi. Ghosh’s group set out to examine how this shared use of the unlicensed spectrum, called coexistence, impacted both Wi-Fi and cellular users.

“We actually found an LAA station located on the UChicago campus, on a pole in front of the bookstore, and in this outside space campus Wi-Fi is also in use,” Ghosh said. “That provided an experimental platform in our backyard, so we started taking measurements.”

Sitting on a bench outside the bookstore, computer science graduate student Muhammad Rochman and postdoctoral researcher Vanlin Sathya set up five laptops and smartphones, and used these to access the local Wi-Fi networks or connect to LAA using cellular data. They also accessed different types of data to vary the demand on these networks, from low (such as accessing text on a website) to high (streaming a video). Each device was equipped with applications that allowed the group to measure the quality of each network connection.

By accessing multiple networks simultaneously, the group found that competition decreased performance—reducing the amount of data transmitted, the speed of transmission, and the signal quality.

“If everybody starts using the spectrum at the same time, it creates interference and no one’s information gets through.”

Research Prof. Monisha Ghosh

This competition was particularly detrimental to Wi-Fi. When LAA was also in active use, the data transmitted by Wi-Fi users decreased by up to 97%. Conversely, LAA data exhibited only a 35% decrease when Wi-Fi was also in use.

Ghosh explained that the incompatibility between Wi-Fi and LAA owes in part to the different protocols each employs to deal with heavy internet traffic.

“If everybody starts using the spectrum at the same time, it creates interference and no one’s information gets through,” Ghosh said. “But Wi-Fi and cellular have developed very different mechanisms for dealing with this issue.”

Because Wi-Fi depends exclusively on unlicensed spectra, it uses a protocol tailored towards unpredictable demand. This protocol, called listen-before-talk, mimics the interactions of a group of polite party-goers. Participants listen and wait for a gap in the conversation to speak. If two people start talking at once, one politely backs off to let the other speak, then chimes in afterwards. Similarly, if multiple Wi-Fi users attempt to access the network at once, each is assigned a brief wait time, and randomness among these wait times reduces the probability of collision.
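The random-backoff etiquette described above can be simulated directly. This is a simplified sketch, assuming each station draws a uniform slot from a fixed contention window (real 802.11 uses exponential backoff and retries); it only illustrates why randomized wait times keep the collision rate low when few stations contend:

```python
import random

def collision_prob(n_stations, cw=16, trials=100_000, seed=0):
    """Monte Carlo estimate of the chance that two or more stations
    pick the same minimal backoff slot (a collision), when each draws
    uniformly from a contention window of `cw` slots."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        slots = [rng.randrange(cw) for _ in range(n_stations)]
        first = min(slots)           # the earliest slot wins the channel
        if slots.count(first) > 1:   # a tie means simultaneous transmission
            collisions += 1
    return collisions / trials

# Two stations collide rarely (~1/16 of attempts with a 16-slot window);
# the collision rate climbs as more stations contend.
p_two = collision_prob(2)
p_eight = collision_prob(8)
```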

Muhammad Rochman Wi-Fi Test

Graduate student Muhammad Rochman uses multiple devices to test performance of local Wi-Fi networks and LAA. Credit: Photo courtesy of Monisha Ghosh

In contrast, cellular providers can predict demand based on cellular access, and so assign each user a specific transmission time. Thus, LAA users are more like speakers at a tightly scheduled colloquium than at an informal party.

This difference in protocols posed little problem while cellular providers were restricted to licensed spectra, but as they have moved onto the unlicensed spectrum, the channel-access parameters LAA employs make it difficult for Wi-Fi users to get equal access to the medium. Although LAA has modified the tight scheduling used in cellular bands to implement listen-before-talk, it still operates with different parameters. One crucial difference is how long each system holds the medium once it gains access: LAA can transmit for up to 10 milliseconds, whereas Wi-Fi transmissions last at most 4 milliseconds.
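The effect of those mismatched hold times is easy to quantify. Even if both systems won the channel equally often, the longer LAA transmit opportunity would claim the lion's share of busy airtime. This is an idealized model that ignores backoff, idle time, and retries:

```python
def airtime_share(txop_a_ms, txop_b_ms):
    """Fraction of busy airtime each system gets if both win channel
    access equally often but hold it for different durations."""
    total = txop_a_ms + txop_b_ms
    return txop_a_ms / total, txop_b_ms / total

# With 10 ms LAA holds vs. 4 ms Wi-Fi holds, LAA ends up with
# roughly 71% of the airtime even under perfectly alternating access.
laa, wifi = airtime_share(10.0, 4.0)
```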

Competition in the shared spectrum occurs not just between Wi-Fi and cellular providers, but also within each network type.

“In our experiments, we compare Wi-Fi/Wi-Fi coexistence with Wi-Fi/LAA coexistence,” Ghosh said. “Wi-Fi/Wi-Fi coexistence isn’t too bad because of the listen-before-talk procedure, so we used this as a standard of fairness. But Wi-Fi/LAA behaves worse, and we were surprised by how much worse.”

In future studies, she hopes to also examine how LAA networks supplied by different cellular providers compete with one another.

Additionally, Ghosh has advised regulatory agencies on how to better align network protocols based on her research.

“These changes have resulted in better coexistence and better sharing mechanisms. But there’s still a long way to go,” she said.

Reference: “Hidden-nodes in coexisting LAA & Wi-Fi: a measurement study of real deployments” by Vanlin Sathya, Muhammad Iqbal Rochman and Monisha Ghosh.

The research was presented on June 18, 2021, at the IEEE International Conference on Communications (ICC 2021) Workshop on Spectrum Sharing.

Recycling Lost Energy: Quantum Laser Turns Energy Loss Into Gain?
Exciton-Polaritonic PT Symmetry

Exciton-polaritonic PT symmetry: Direct coupling between upward- and downward-polariton modes in a six-fold symmetric microcavity with loss manipulation leads to PT-symmetry breaking with low-threshold phase transition. Credit: KAIST

A new laser that generates quantum particles can recycle lost energy for highly efficient, low threshold laser applications.

Scientists at KAIST have fabricated a laser system that generates highly interactive quantum particles at room temperature. Their findings, published in the journal Nature Photonics, could lead to a single microcavity laser system that requires lower threshold energy as its energy loss increases.

The system, developed by KAIST physicist Yong-Hoon Cho and colleagues, involves shining light through a single hexagonal-shaped microcavity treated with a loss-modulated silicon nitride substrate. The system design leads to the generation of a polariton laser at room temperature, which is exciting because this usually requires cryogenic temperatures.

The researchers found another unique and counter-intuitive feature of this design. Normally, energy is lost during laser operation. But in this system, as energy loss increased, the amount of energy needed to induce lasing decreased. Exploiting this phenomenon could lead to the development of high efficiency, low threshold lasers for future quantum optical devices.

“This system applies a concept of quantum physics known as parity-time reversal symmetry,” explains Professor Cho. “This is an important platform that allows energy loss to be used as gain. It can be used to reduce laser threshold energy for classical optical devices and sensors, as well as quantum devices and controlling the direction of light.”

The key is the design and materials. The hexagonal microcavity divides light particles into two different modes: one that passes through the upward-facing triangle of the hexagon and another that passes through its downward-facing triangle. Both modes of light particles have the same energy and path but don’t interact with each other. 

However, the light particles do interact with other particles called excitons, provided by the hexagonal microcavity, which is made of semiconductors. This interaction leads to the generation of new quantum particles called polaritons that then interact with each other to generate the polariton laser. By controlling the degree of loss between the microcavity and the semiconductor substrate, an intriguing phenomenon arises, with the threshold energy becoming smaller as energy loss increases.

Reference: “Room-temperature polaritonic non-Hermitian system with single microcavity” by Hyun Gyu Song, Minho Choi, Kie Young Woo, Chung Hyun Park and Yong-Hoon Cho, 10 June 2021, Nature Photonics.
DOI: 10.1038/s41566-021-00820-z

This research was supported by the Samsung Science and Technology Foundation and Korea’s National Research Foundation.

NASA's NEOWISE Asteroid-Hunting Space Telescope Gets Two-Year Mission Extension
Wide field Infrared Survey Explorer

Artist’s concept of NASA’s WISE (Wide-field Infrared Survey Explorer) spacecraft, which was an infrared-wavelength astronomical space telescope active from December 2009 to February 2011. In September 2013 the spacecraft was assigned a new mission as NEOWISE to help find near-Earth asteroids and comets. Credit: NASA/JPL-Caltech

NEOWISE has provided an estimate of the size of over 1,850 near-Earth objects, helping us better understand our nearest solar system neighbors.

For two more years, NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) will continue its hunt for asteroids and comets – including objects that could pose a hazard to Earth. This mission extension means NASA’s prolific near-Earth object (NEO) hunting space telescope will continue operations until June 2023.

“At NASA, we’re always looking up, surveying the sky daily to find potential hazards and exploring asteroids to help unlock the secrets of the formation of our solar system,” said NASA Administrator Bill Nelson. “Using ground-based telescopes, over 26,000 near-Earth asteroids have already been discovered, but there are many more to be found. We’ll enhance our observations with space-based capabilities like NEOWISE and the future, much more capable NEO Surveyor to find the remaining unknown asteroids more quickly and identify potentially hazardous asteroids and comets before they are a threat to us here on Earth.”

Originally launched as the Wide-field Infrared Survey Explorer (WISE) mission in December 2009, the space telescope surveyed the entire sky in infrared wavelengths, detecting asteroids, dim stars, and some of the faintest galaxies visible in deep space. WISE completed its primary mission when it depleted its cryogenic coolant and was put into hibernation in February 2011. Observations resumed in December 2013 when the space telescope was repurposed by NASA’s Planetary Science Division as “NEOWISE” to identify asteroids and comets throughout the solar system, with special attention to those that pass close to Earth’s orbit.

“NEOWISE provides a unique and critical capability in our global mission of planetary defense, by allowing us to rapidly measure the infrared emission and more accurately estimate the size of hazardous asteroids as they are discovered,” said Lindley Johnson, NASA’s Planetary Defense Officer and head of the Planetary Defense Coordination Office (PDCO) at NASA Headquarters in Washington. “Extending NEOWISE’s mission highlights not only the important work that is being done to safeguard our planet, but also the valuable science that is being collected about the asteroids and comets further out in space.”

Comet NEOWISE Tucson

Comet NEOWISE—which was discovered on March 27, 2020, by NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) mission—captured on July 6, 2020, above the northeast horizon just before sunrise in Tucson. Credit: Vishnu Reddy

As the Sun heats an asteroid, it warms up and re-emits that heat as faint infrared radiation. By studying this infrared signature, scientists can estimate the asteroid’s size and compare it with observations made by optical telescopes on the ground. The comparison shows how reflective the surface is, which in turn provides clues to its composition.
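
The interplay between size, brightness, and reflectivity can be illustrated with the standard asteroid diameter relation, D = 1329 / √p_V × 10^(−H/5), where H is the optical absolute magnitude and p_V the albedo. This is a textbook formula, not the NEOWISE pipeline itself, but it shows why optical brightness alone is ambiguous and an independent size measurement (such as NEOWISE’s infrared one) pins down reflectivity:

```python
import math

def diameter_km(abs_magnitude_h: float, albedo: float) -> float:
    """Standard asteroid size relation: D = 1329 / sqrt(p_V) * 10^(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude_h / 5.0)

# Two asteroids with identical optical brightness (H = 20) can differ
# greatly in size depending on how reflective their surfaces are:
dark = diameter_km(20.0, 0.05)    # dark, comet-like surface -> ~0.59 km
bright = diameter_km(20.0, 0.25)  # bright, stony surface    -> ~0.27 km
print(f"dark: {dark:.2f} km, bright: {bright:.2f} km")
```

A dark asteroid must be more than twice as large as a bright one to look equally bright in visible light, which is exactly the degeneracy the infrared size estimate breaks.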

To date, NEOWISE has provided an estimate of the size of over 1,850 NEOs, helping us better understand our nearest solar system neighbors. As of March 2021, the mission has made 1,130,000 confirmed infrared observations of approximately 39,100 objects throughout the solar system since its restart in 2013. Mission data is shared freely by the IPAC/Caltech-led archive and the data has contributed to over 1,600 peer-reviewed studies. The University of Arizona is also a key partner of the NEOWISE mission as the home institution of the NEOWISE principal investigator, Amy Mainzer, who is a professor of planetary science at the University’s Lunar and Planetary Laboratory.

Among its many accomplishments after its reactivation, NEOWISE also discovered Comet NEOWISE, which was named after the mission and dazzled observers worldwide in 2020.

NEOWISE’s replacement, the next-generation NEO Surveyor, is currently scheduled to launch in 2026, and will greatly expand on what we have learned, and continue to learn, from NEOWISE.

“NEOWISE has taught us a lot about how to find, track, and characterize Earth-approaching asteroids and comets using a space-based infrared telescope,” said Mainzer. “The mission serves as an important precursor for carrying out a more comprehensive search for these objects using the new telescope we’re building, the NEO Surveyor.” Mainzer is also the lead of the NEO Surveyor mission.

The NEOWISE project is managed by NASA’s Jet Propulsion Laboratory in Southern California, a division of Caltech, and the University of Arizona, supported by NASA’s PDCO.

Protein “Big Bang” Reveals Molecular Makeup for Medicine and Bioengineering Applications
Gustavo Caetano-Anollés Surrounded by Depiction of Molecular Networks

Research by Gustavo Caetano-Anollés and Fayez Aziz, University of Illinois, reveals a “big bang” during evolution of protein subunits known as domains. The team looked for protein relationships and domain recruitment into proteins over 3.8 billion years across all taxonomic units. Their results could have implications for vaccine development and disease management. Credit: Fred Zwicky, University of Illinois

Proteins have been quietly taking over our lives since the COVID-19 pandemic began. We’ve been living at the whim of the virus’s so-called “spike” protein, which has mutated dozens of times to create increasingly deadly variants. But the truth is, we have always been ruled by proteins. At the cellular level, they’re responsible for pretty much everything.

Proteins are so fundamental that DNA – the genetic material that makes each of us unique – is essentially just a long sequence of protein blueprints. That’s true for animals, plants, fungi, bacteria, archaea, and even viruses. And just as those groups of organisms evolve and change over time, so too do proteins and their component parts.

A new study from University of Illinois researchers, published in Scientific Reports, maps the evolutionary history and interrelationships of protein domains, the subunits of protein molecules, over 3.8 billion years.

“Knowing how and why domains combine in proteins during evolution could help scientists understand and engineer the activity of proteins for medicine and bioengineering applications. For example, these insights could guide disease management, such as making better vaccines from the spike protein of COVID-19 viruses,” says Gustavo Caetano-Anollés, professor in the Department of Crop Sciences, affiliate of the Carl R. Woese Institute for Genomic Biology at Illinois, and senior author on the paper.

Caetano-Anollés has studied the evolution of COVID mutations since the early stages of the pandemic, but that timeline represents a vanishingly tiny fraction of what he and doctoral student Fayez Aziz took on in their current study.

The researchers compiled the sequences and structures of millions of proteins encoded in hundreds of genomes across all taxonomic groups, including higher organisms and microbes. They focused not on whole proteins but on their structural domains.

“Most proteins are made of more than one domain. These are compact structural units, or modules, that harbor specialized functions,” Caetano-Anollés says. “More importantly, they are the units of evolution.”

After sorting proteins into domains to build evolutionary trees, they set to work building a network to understand how domains have developed and been shared across proteins throughout billions of years of evolution.

“We built a time series of networks that describe how domains have accumulated and how proteins have rearranged their domains through evolution. This is the first time such a network of ‘domain organization’ has been studied as an evolutionary chronology,” Fayez Aziz says. “Our survey revealed there is a vast evolving network describing how domains combine with each other in proteins.”

Each link of the network represents a moment when a particular domain was recruited into a protein, typically to perform a new function.
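
The bookkeeping behind such a network can be sketched in a few lines. The following toy example (with hypothetical protein and domain names, not the authors’ actual data or method) records, for each pair of domains, the first protein in a chronologically ordered survey that combined them:

```python
from itertools import combinations

# Toy data: each protein maps to the list of domains it contains.
# Dict order stands in for the evolutionary chronology of the survey.
proteins = {
    "kinase_A": ["kinase", "SH2"],
    "kinase_B": ["kinase", "SH2", "SH3"],
    "sensor_X": ["PAS", "kinase"],
}

# Each network link records the first observed combination of two domains.
edges = {}  # (domain, domain) -> first protein that combined them
for name, domains in proteins.items():
    for pair in combinations(sorted(set(domains)), 2):
        edges.setdefault(pair, name)

for pair, first in sorted(edges.items()):
    print(pair, "first combined in", first)
```

At scale, each such link corresponds to a recruitment event, and ordering the links in time yields the evolutionary chronology the study describes.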

“This fact alone strongly suggests domain recruitment is a powerful force in nature,” Fayez Aziz says. The chronology also revealed which domains contributed important protein functions. For example, the researchers were able to trace the origins of domains responsible for environmental sensing as well as secondary metabolites, or toxins used in bacterial and plant defenses.

The analysis showed domains started to combine early in protein evolution, but there were also periods of explosive network growth. For example, the researchers describe a “big bang” of domain combinations 1.5 billion years ago, coinciding with the rise of multicellular organisms and eukaryotes, organisms with membrane-bound nuclei that include humans.

The existence of biological big bangs is not new. Caetano-Anollés’ team previously reported the massive and early origin of metabolism, and they recently found it again when tracking the history of metabolic networks.

The historical record of a big bang describing the evolutionary patchwork of proteins provides new tools to understand protein makeup.

“This could help identify, for example, why structural variations and genomic recombinations occur often in SARS-CoV-2,” Caetano-Anollés says.

He adds that this new way of understanding proteins could help prevent pandemics by dissecting how virus diseases originate. It could also help mitigate disease by improving vaccine design when outbreaks occur.

Reference: “Evolution of networks of protein domain organization” by M. Fayez Aziz and Gustavo Caetano-Anollés, 8 June 2021, Scientific Reports.
DOI: 10.1038/s41598-021-90498-8

The work was supported by the National Science Foundation and the U.S. Department of Agriculture.

The Department of Crop Sciences is in the College of Agricultural, Consumer and Environmental Sciences at the University of Illinois.

Deep Space Atomic Clock to Improve GPS, Increase Spacecraft Autonomy
Deep Space Atomic Clock to Improve GPS, Increase Spacecraft Autonomy
Deep Space Atomic Clock Illustration

NASA’s Deep Space Atomic Clock has been operating aboard the General Atomics Orbital Test Bed satellite since June 2019. This illustration shows the spacecraft in Earth orbit. Credit: General Atomics Electromagnetic Systems

Designed to improve navigation for robotic explorers and the operation of GPS satellites, the technology demonstration reports a significant milestone.

Spacecraft that venture beyond our Moon rely on communication with ground stations on Earth to figure out where they are and where they’re going. NASA’s Deep Space Atomic Clock is working toward giving those far-flung explorers more autonomy when navigating. In a new paper published on June 30, 2021, in the journal Nature, the mission reports progress in their work to improve the ability of space-based atomic clocks to measure time consistently over long periods.

Known as stability, this feature also impacts the operation of GPS satellites that help people navigate on Earth, so this work also has the potential to increase the autonomy of next-generation GPS spacecraft.


To calculate the trajectory of a distant spacecraft, engineers send signals from the spacecraft to Earth and back. They use refrigerator-size atomic clocks on the ground to log the timing of those signals, which is essential for precisely measuring the spacecraft’s position. But for robots on Mars or more distant destinations, waiting for the signals to make the trip can quickly add up to tens of minutes or even hours.
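
The scale of that delay is simple to estimate: radio signals travel at the speed of light, so the one-way time is just distance divided by c. A quick back-of-envelope calculation using approximate Earth–Mars distances (my figures, not from the article):

```python
# One-way radio signal travel time at typical Earth-Mars distances.
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_minutes(distance_km: float) -> float:
    return distance_km / C_KM_S / 60.0

closest = one_way_minutes(54.6e6)   # near closest approach
farthest = one_way_minutes(401e6)   # near superior conjunction
print(f"{closest:.1f} to {farthest:.1f} minutes one way")
```

One way takes roughly 3 to 22 minutes depending on the planets’ positions, so a full round trip for a navigation fix can approach three quarters of an hour.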

If those spacecraft carried atomic clocks, they could calculate their own position and direction, but the clocks would have to be highly stable. GPS satellites carry atomic clocks to help us get to our destinations on Earth, but those clocks require updates several times a day to maintain the necessary level of stability. Deep space missions would require more stable space-based clocks.

Deep Space Atomic Clock General Atomics Electromagnetic Systems Orbital Test Bed

A glimpse of the Deep Space Atomic Clock in the middle bay of the General Atomics Electromagnetic Systems Orbital Test Bed spacecraft. Credit: NASA

Managed by NASA’s Jet Propulsion Laboratory in Southern California, the Deep Space Atomic Clock has been operating aboard General Atomics’ Orbital Test Bed spacecraft since June 2019. The new study reports that the mission team has set a new record for long-term atomic clock stability in space, reaching more than 10 times the stability of current space-based atomic clocks, including those on GPS satellites.

When Every Nanosecond Counts

All atomic clocks have some degree of instability that leads to an offset in the clock’s time versus the actual time. If not corrected, the offset, while minuscule, increases rapidly, and with spacecraft navigation, even a tiny offset could have drastic effects.

One of the key goals of the Deep Space Atomic Clock mission was to measure the clock’s stability over longer and longer periods, to see how it changes with time. In the new paper, the team reports a level of stability that leads to a time deviation of less than four nanoseconds after more than 20 days of operation.

“As a general rule, an uncertainty of one nanosecond in time corresponds to a distance uncertainty of about one foot,” said Eric Burt, an atomic clock physicist for the mission at JPL and co-author of the new paper. “Some GPS clocks must be updated several times a day to maintain this level of stability, and that means GPS is highly dependent on communication with the ground. The Deep Space Atomic Clock pushes this out to a week or more, thus potentially giving an application like GPS much more autonomy.”

The stability and subsequent time delay reported in the new paper is about five times better than what the team reported in the spring of 2020. This does not represent an improvement in the clock itself, but in the team’s measurement of the clock’s stability. Longer operating periods and almost a full year of additional data made it possible to improve the precision of their measurement.

NASA's Deep Space Atomic Clock

NASA’s Deep Space Atomic Clock could revolutionize deep space navigation. One key requirement for the technology demonstration was a compact design. The complete hardware package is shown here and is only about 10 inches (25 centimeters) on each side. Credit: NASA/JPL-Caltech

The Deep Space Atomic Clock mission will conclude in August, but NASA announced that work on this technology continues: the Deep Space Atomic Clock-2, an improved version of the cutting-edge timekeeper, will fly on the VERITAS (short for Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy) mission to Venus. Like its predecessor, the new space clock is a technology demonstration, meaning its goal is to advance in-space capabilities by developing instruments, hardware, software, or other technologies that don’t currently exist. Built by JPL and funded by NASA’s Space Technology Mission Directorate (STMD), the ultra-precise clock signal generated with this technology could help enable autonomous spacecraft navigation and enhance radio science observations on future missions.

Deep Space Atomic Clock Heart

A computer-aided design, or CAD, drawing of the linear ion trap of the clock – the “heart” of the Deep Space Atomic Clock’s physics package – is slightly smaller than two rolls of quarters laid side by side. The DSAC project is a small, low-mass atomic clock based on mercury-ion trap technology that will be demonstrated in space, providing unprecedented stability needed for next-generation deep space navigation and radio science. Credit: NASA/JPL

“NASA’s selection of Deep Space Atomic Clock-2 on VERITAS speaks to this technology’s promise,” said Todd Ely, Deep Space Atomic Clock principal investigator and project manager at JPL. “On VERITAS, we aim to put this next generation space clock through its paces and demonstrate its potential for deep space navigation and science.”

Reference: “Demonstration of a trapped-ion atomic clock in space” by E. A. Burt, J. D. Prestage, R. L. Tjoelker, D. G. Enzer, D. Kuang, D. W. Murphy, D. E. Robison, J. M. Seubert, R. T. Wang and T. A. Ely, 30 June 2021, Nature.
DOI: 10.1038/s41586-021-03571-7

More About the Mission

The Deep Space Atomic Clock is hosted on a spacecraft provided by General Atomics Electromagnetic Systems of Englewood, Colorado. It is sponsored by STMD’s Technology Demonstration Missions program located at NASA’s Marshall Space Flight Center in Huntsville, Alabama, and NASA’s Space Communications and Navigation (SCaN) program within NASA’s Human Exploration and Operations Mission Directorate. JPL manages the project.

Diet Rich in Omega 3 Fatty Acids May Help Reduce Migraine Headaches

Food Rich in Omega 3

Trial provides ‘grounds for optimism’ for many people with persistent headaches and those who care for them.

Eating a diet rich in omega 3 (n-3) fatty acids reduces the frequency of headaches compared with a diet with normal intake of omega 3 and omega 6 (n-6) fatty acids, finds a study published by The BMJ today (June 30, 2021).

Modern industrialized diets tend to be low in omega 3 fatty acids and high in omega 6 fatty acids. These fatty acids are precursors to oxylipins — molecules involved in regulating pain and inflammation.

Oxylipins derived from omega 3 fatty acids are associated with pain-reducing effects, while oxylipins derived from omega 6 fatty acids worsen pain and can provoke migraine. But previous studies evaluating omega 3 fatty acid supplements for migraine have been inconclusive.

So a team of US researchers wanted to find out whether diets rich in omega 3 fatty acids would increase levels of the pain-reducing 17-hydroxydocosahexaenoic acid (17-HDHA) and reduce the frequency and severity of headaches.

Their results are based on 182 patients at the University of North Carolina, USA (88% female; average age 38 years) with migraine headaches on 5-20 days per month who were randomly assigned to one of three diets for 16 weeks.

The control diet included typical levels of omega 3 and omega 6 fatty acids. Both interventional diets raised omega 3 fatty acid intake. One kept omega 6 acid intake the same as the control diet, and the other concurrently lowered omega 6 acid intake.

During the trial, participants received regular dietary counseling and access to online support information. They also completed the headache impact test (HIT-6) — a questionnaire assessing headache impact on quality of life. Headache frequency was assessed daily with an electronic diary.

Over the 16 weeks, both interventional diets increased 17-HDHA levels compared with the control diet, and while HIT-6 scores improved in both interventional groups, they were not statistically significantly different from the control group.

However, headache frequency was statistically significantly decreased in both intervention groups.

The high omega 3 diet was associated with a reduction of 1.3 headache hours per day and two headache days per month. The high omega 3 plus low omega 6 diet group saw a reduction of 1.7 headache hours per day and four headache days per month, suggesting additional benefit from lowering dietary omega 6 fatty acids.

Participants in the intervention groups also reported shorter and less severe headaches compared with those in the control group.

This was a high-quality, well-designed trial, but the researchers do point to some limitations, such as the difficulty patients have sticking to a strict diet and the fact that most participants were relatively young women, so the results may not apply to children, older adults, men, or other populations.

“While the diets did not significantly improve quality of life, they produced large, robust reductions in frequency and severity of headaches relative to the control diet,” they write.

“This study provides a biologically plausible demonstration that pain can be treated through targeted dietary alterations in humans. Collective findings suggest causal mechanisms linking n-3 and n-6 fatty acids to [pain regulation], and open the door to new approaches for managing chronic pain in humans,” they conclude.

These results support recommending a high omega 3 diet to patients in clinical practice, says Rebecca Burch at the Brigham and Women’s Hospital, in a linked editorial.

She acknowledges that interpretation of this study’s findings is complex, but points out that trials of recently approved drugs for migraine prevention reported reductions of around 2-2.5 headache days per month compared with placebo, suggesting that a dietary intervention can be comparable or better.

What’s more, many people with migraine are highly motivated and interested in dietary changes, she adds. These findings “take us one step closer to a goal long sought by headache patients and those who care for them: a migraine diet backed up by robust clinical trial results.”

References:

“Dietary alteration of n-3 and n-6 fatty acids for headache reduction in adults with migraine: randomized controlled trial” 30 June 2021, The BMJ.
DOI: 10.1136/bmj.n1448

“Dietary omega 3 fatty acids for migraine” 30 June 2021, The BMJ.
DOI: 10.1136/bmj.n1535

Funding: National Institutes of Health (NIH); National Center for Complementary and Integrative Health (NCCIH)