Faster-than-light travel is the only way humans could ever get to other stars in a reasonable amount of time. Credit: NASA
The closest star to Earth is Proxima Centauri. It is about 4.25 light-years away, or about 25 trillion miles (40 trillion km). The fastest spacecraft ever built, the now-in-space Parker Solar Probe, will reach a top speed of 450,000 mph. At that speed it would take just 20 seconds to go from Los Angeles to New York City, but it would take the solar probe about 6,633 years to reach Earth’s nearest neighboring solar system.
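A quick back-of-the-envelope check of those numbers can be written as a short Python sketch that divides the quoted distance by the quoted top speed. The figures are the rounded values above, so the result lands in the same several-thousand-year ballpark rather than matching the article's estimate exactly.

```python
# Rough travel-time estimate to Proxima Centauri at Parker Solar Probe speed.
# Figures are the rounded values quoted above; small changes in the assumed
# top speed shift the answer by a few hundred years.

MILES_PER_LIGHT_YEAR = 5.88e12           # ~5.88 trillion miles per light-year
distance_ly = 4.25                       # distance to Proxima Centauri
top_speed_mph = 450_000                  # quoted top speed of the probe

distance_miles = distance_ly * MILES_PER_LIGHT_YEAR
hours = distance_miles / top_speed_mph
years = hours / (24 * 365.25)

print(f"Distance: {distance_miles:.2e} miles")
print(f"Travel time at top speed: {years:,.0f} years")  # several thousand years
```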
If humanity ever wants to travel easily between stars, people will need to go faster than light. But so far, faster-than-light travel is possible only in science fiction.
In Isaac Asimov’s Foundation series, humanity can travel from planet to planet, star to star or across the universe using jump drives. As a kid, I read as many of those stories as I could get my hands on. I am now a theoretical physicist and study nanotechnology, but I am still fascinated by the ways humanity could one day travel in space.
Some characters – like the astronauts in the movies “Interstellar” and “Thor” – use wormholes to travel between solar systems in seconds. Another approach – familiar to “Star Trek” fans – is warp drive technology. Warp drives are a theoretically possible, if still far-fetched, technology. Two recent papers made headlines in March when researchers claimed to have overcome one of the many challenges that stand between the theory of warp drives and reality.
But how do these theoretical warp drives really work? And will humans be making the jump to warp speed anytime soon?
This 2-dimensional representation shows the flat, unwarped bubble of spacetime in the center where a warp drive would sit surrounded by compressed spacetime to the right (downward curve) and expanded spacetime to the left (upward curve). Credit: AllenMcC/Wikimedia Commons
Compression and expansion
Physicists’ current understanding of spacetime comes from Albert Einstein’s theory of General Relativity. General Relativity states that space and time are fused and that nothing can travel faster than the speed of light. General relativity also describes how mass and energy warp spacetime – hefty objects like stars and black holes curve spacetime around them. This curvature is what you feel as gravity and why many spacefaring heroes worry about “getting stuck in” or “falling into” a gravity well. Early science fiction writers John Campbell and Asimov saw this warping as a way to skirt the speed limit.
What if a starship could compress space in front of it while expanding spacetime behind it? “Star Trek” took this idea and named it the warp drive.
In 1994, Miguel Alcubierre, a Mexican theoretical physicist, showed that compressing spacetime in front of the spaceship while expanding it behind was mathematically possible within the laws of General Relativity. So, what does that mean? Imagine the distance between two points is 10 meters (33 feet). If you are standing at point A and can travel one meter per second, it would take 10 seconds to get to point B. However, let’s say you could somehow compress the space between you and point B so that the interval is now just one meter. Then, moving through spacetime at your maximum speed of one meter per second, you would be able to reach point B in about one second. In theory, this approach does not contradict the laws of relativity since you are not moving faster than light in the space around you. Alcubierre showed that the warp drive from “Star Trek” was in fact theoretically possible.
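The point-A-to-point-B example above can be written as a tiny calculation. This is only the toy arithmetic from the paragraph, not Alcubierre's actual metric:

```python
# Toy illustration of the point A -> point B example above.
# Locally you never exceed 1 m/s; compressing the intervening space
# shortens the distance you actually have to cross.

local_speed = 1.0          # meters per second, your maximum local speed
true_distance = 10.0       # meters between A and B in ordinary space
compression_factor = 10.0  # how much the intervening space is compressed

compressed_distance = true_distance / compression_factor   # 1 meter
travel_time = compressed_distance / local_speed            # 1 second

effective_speed = true_distance / travel_time              # 10 m/s "as seen from outside"
print(travel_time, effective_speed)                        # 1.0 s, 10.0 m/s
```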
Proxima Centauri here we come, right? Unfortunately, Alcubierre’s method of compressing spacetime had one problem: it requires negative energy or negative mass.
This 2-dimensional representation shows how positive mass curves spacetime (left side, blue Earth) and negative mass curves spacetime in an opposite direction (right side, red Earth). Credit: Tokamac/Wikimedia Commons, CC BY-SA
A negative energy problem
Alcubierre’s warp drive would work by creating a bubble of flat spacetime around the spaceship and curving spacetime around that bubble to reduce distances. The warp drive would require either negative mass – a theorized type of matter – or a ring of negative energy density to work. Physicists have never observed negative mass, so that leaves negative energy as the only option.
To create negative energy, a warp drive would use a huge amount of mass to create an imbalance between particles and antiparticles. For example, if an electron and an antielectron appear near the warp drive, one of the particles would get trapped by the mass, and this imbalance results in a negative energy density. Alcubierre’s warp drive would use this negative energy to create the spacetime bubble.
But for a warp drive to generate enough negative energy, you would need a lot of matter. Alcubierre estimated that a warp drive with a 100-meter bubble would require the mass of the entire visible universe.
In 1999, physicist Chris Van Den Broeck showed that expanding the volume inside the bubble but keeping the surface area constant would reduce the energy requirements significantly, to just about the mass of the sun. A significant improvement, but still far beyond all practical possibilities.
In one of those recent papers, physicists Alexey Bobrick and Gianni Martire realized that by modifying spacetime within the bubble in a certain way, they could remove the need to use negative energy. This solution, though, does not produce a warp drive that can go faster than light.
Independently, physicist Erik Lentz also proposed a solution that does not require negative energy. He used a different geometric approach to solve the equations of General Relativity, and by doing so, he found that a warp drive wouldn’t need to use negative energy. Lentz’s solution would allow the bubble to travel faster than the speed of light.
It is essential to point out that these exciting developments are mathematical models. As a physicist, I won’t fully trust models until we have experimental proof. Yet, the science of warp drives is coming into view. As a science fiction fan, I welcome all this innovative thinking. In the words of Captain Picard, things are only impossible until they are not.
Written by Mario Borunda, Associate Professor of Physics, Oklahoma State University.
Zircons studied by the research team, photographed using cathodoluminescence, a technique that allowed the team to visualize the interiors of the crystals using a specialized scanning electron microscope. Dark circles on the zircons are the cavities left by the laser that was used to analyze the age and chemistry of the zircons. Credit: Michael Ackerson, Smithsonian
Earth’s Oldest Minerals Date Onset of Plate Tectonics to 3.6 Billion Years Ago
Ancient zircons from the Jack Hills of Western Australia hone the date of an event that was crucial to making the planet hospitable to life.
Scientists led by Michael Ackerson, a research geologist at the Smithsonian’s National Museum of Natural History, provide new evidence that modern plate tectonics, a defining feature of Earth and its unique ability to support life, emerged roughly 3.6 billion years ago.
Earth is the only planet known to host complex life and that ability is partly predicated on another feature that makes the planet unique: plate tectonics. No other planetary bodies known to science have Earth’s dynamic crust, which is split into continental plates that move, fracture, and collide with each other over eons. Plate tectonics afford a connection between the chemical reactor of Earth’s interior and its surface that has engineered the habitable planet people enjoy today, from the oxygen in the atmosphere to the concentrations of climate-regulating carbon dioxide. But when and how plate tectonics got started has remained mysterious, buried beneath billions of years of geologic time.
The study, published May 14, 2021, in the journal Geochemical Perspectives Letters, uses zircons, the oldest minerals ever found on Earth, to peer back into the planet’s ancient past.
The Jack Hills of Western Australia, where the zircons studied were sampled from 15 grapefruit-sized rocks collected by the research team. Credit: Dustin Trail, University of Rochester
The oldest of the zircons in the study, which came from the Jack Hills of Western Australia, were around 4.3 billion years old — which means these nearly indestructible minerals formed when the Earth itself was in its infancy, only roughly 200 million years old. Along with other ancient zircons collected from the Jack Hills spanning Earth’s earliest history up to 3 billion years ago, these minerals provide the closest thing researchers have to a continuous chemical record of the nascent world.
“We are reconstructing how the Earth changed from a molten ball of rock and metal to what we have today,” Ackerson said. “None of the other planets have continents or liquid oceans or life. In a way, we are trying to answer the question of why Earth is unique, and we can answer that to an extent with these zircons.”
To look billions of years into Earth’s past, Ackerson and the research team collected 15 grapefruit-sized rocks from the Jack Hills and reduced them into their smallest constituent parts — minerals — by grinding them into sand with a machine called a chipmunk. Fortunately, zircons are very dense, which makes them relatively easy to separate from the rest of the sand using a technique similar to gold panning.
A thin, polished slice of a rock collected from the Jack Hills of Western Australia. Using a special microscope equipped with a polarizing lens, the research team was able to examine the intricate internal structure of quartz that makes up the rock, including unique features that allowed them to identify ancient zircons (magenta mineral in the center of the red-outlined inset image in the right photo). Credit: Michael Ackerson, Smithsonian
The team tested more than 3,500 zircons, each just a couple of human hairs wide, by blasting them with a laser and then measuring their chemical composition with a mass spectrometer. These tests revealed the age and underlying chemistry of each zircon. Of the thousands tested, only about 200 were fit for study, owing to the ravages these minerals endured over the billions of years since their creation.
“Unlocking the secrets held within these minerals is no easy task,” Ackerson said. “We analyzed thousands of these crystals to come up with a handful of useful data points, but each sample has the potential to tell us something completely new and reshape how we understand the origins of our planet.”
A zircon’s age can be determined with a high degree of precision because each one contains uranium. Uranium’s famously radioactive nature and well-quantified rate of decay allow scientists to reverse engineer how long the mineral has existed.
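As an illustration of that reverse-engineering, the standard radiometric age relation ties a measured daughter-to-parent isotope ratio to time through the decay constant. The sketch below uses the uranium-238 decay constant and a hypothetical ratio chosen to land near 3.6 billion years; it is a textbook relation, not the team's actual analysis.

```python
import math

# Standard radiometric age equation: t = (1/lambda) * ln(1 + D/P),
# where D/P is the measured daughter-to-parent ratio.
# The decay constant below is for uranium-238 -> lead-206; the example
# ratio is hypothetical.

LAMBDA_U238 = 1.55125e-10          # per year, decay constant of U-238

def radiometric_age(daughter_to_parent_ratio: float) -> float:
    """Return the age in years implied by a measured D/P ratio."""
    return math.log(1.0 + daughter_to_parent_ratio) / LAMBDA_U238

example_ratio = 0.748              # hypothetical Pb-206/U-238 ratio
print(f"{radiometric_age(example_ratio) / 1e9:.2f} billion years")  # ~3.60
```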
The aluminum content of each zircon was also of interest to the research team. Tests on modern zircons show that high-aluminum zircons can only be produced in a limited number of ways, which allows researchers to use the presence of aluminum to infer what may have been going on, geologically speaking, at the time the zircon formed.
After analyzing the results of the hundreds of useful zircons from among the thousands tested, Ackerson and his co-authors identified a marked increase in aluminum concentrations roughly 3.6 billion years ago.
“This compositional shift likely marks the onset of modern-style plate tectonics and potentially could signal the emergence of life on Earth,” Ackerson said. “But we will need to do a lot more research to determine this geologic shift’s connections to the origins of life.”
The line of inference that links high-aluminum zircons to the onset of a dynamic crust with plate tectonics goes like this: one of the few ways for high-aluminum zircons to form is by melting rocks deeper beneath Earth’s surface.
“It’s really hard to get aluminum into zircons because of their chemical bonds,” Ackerson said. “You need to have pretty extreme geologic conditions.”
Ackerson reasons that this evidence of rocks being melted deeper beneath Earth’s surface meant the planet’s crust was getting thicker and beginning to cool, and that this thickening of the crust was a sign that the transition to modern plate tectonics was underway.
Prior research on the 4 billion-year-old Acasta Gneiss in northern Canada also suggests that Earth’s crust was thickening and causing rock to melt deeper within the planet.
“The results from the Acasta Gneiss give us more confidence in our interpretation of the Jack Hills zircons,” Ackerson said. “Today these locations are separated by thousands of miles, but they’re telling us a pretty consistent story, which is that around 3.6 billion years ago something globally significant was happening.”
This work is part of the museum’s new initiative called Our Unique Planet, a public-private partnership, which supports research into some of the most enduring and significant questions about what makes Earth special. Other research will investigate the source of Earth’s liquid oceans and how minerals may have helped spark life.
Ackerson said he hopes to follow up these results by searching the ancient Jack Hills zircons for traces of life and by looking at other supremely old rock formations to see if they too show signs of Earth’s crust thickening around 3.6 billion years ago.
Reference: “Emergence of peraluminous crustal magmas and implications for the early Earth” by M.R. Ackerson, D. Trail and J. Buettner, 14 May 2021, Geochemical Perspectives Letters. DOI: 10.7185/geochemlet.2114
Funding and support for this research were provided by the Smithsonian and the National Aeronautics and Space Administration (NASA).
Military units like the 780th Military Intelligence Brigade shown here are just one component of U.S. national cyber defense. Credit: Fort George G. Meade
Takeaways:
There are no easy solutions to shoring up U.S. national cyber defenses.
Software supply chains and private sector infrastructure companies are vulnerable to hackers.
Many U.S. companies outsource software development because of a talent shortage, and some of that outsourcing goes to companies in Eastern Europe that are vulnerable to Russian operatives.
U.S. national cyber defense is split between the Department of Defense and the Department of Homeland Security, which leaves gaps in authority.
The ransomware attack on Colonial Pipeline on May 7, 2021, exemplifies the huge challenges the U.S. faces in shoring up its cyber defenses. The private company, which controls a significant component of the U.S. energy infrastructure and supplies nearly half of the East Coast’s liquid fuels, was vulnerable to an all-too-common type of cyber attack. The FBI has attributed the attack to a Russian cybercrime gang. It would be difficult for the government to mandate better security at private companies, and the government is unable to provide that security for the private sector.
Similarly, the SolarWinds hack, one of the most devastating cyber attacks in history, which came to light in December 2020, exposed vulnerabilities in global software supply chains that affect government and private sector computer systems. It was a major breach of national security that revealed gaps in U.S. cyber defenses.
These gaps include inadequate security by a major software producer, fragmented authority for government support to the private sector, blurred lines between organized crime and international espionage, and a national shortfall in software and cybersecurity skills. None of these gaps is easily bridged, but the scope and impact of the SolarWinds attack show how critical controlling these gaps is to U.S. national security.
The SolarWinds breach, likely carried out by a group affiliated with Russia’s FSB security service, compromised the software development supply chain used by SolarWinds to update 18,000 users of its Orion network management product. SolarWinds sells software that organizations use to manage their computer networks. The hack, which allegedly began in early 2020, was discovered only in December when cybersecurity company FireEye revealed that it had been hit by the malware. More worrisome, this may have been part of a broader attack on government and commercial targets in the U.S.
The Biden administration is preparing an executive order that is expected to address these software supply chain vulnerabilities. However, these changes, as important as they are, would probably not have prevented the SolarWinds attack. And preventing ransomware attacks like the Colonial Pipeline attack would require U.S. intelligence and law enforcement to infiltrate every organized cyber criminal group in Eastern Europe.
Supply chains, sloppy security and a talent shortage
The vulnerability of the software supply chain – the collections of software components and software development services companies use to build software products – is a well-known problem in the security field. In response to a 2017 executive order, a report by a Department of Defense-led interagency task force identified “a surprising level of foreign dependence,” workforce challenges, and critical capabilities such as printed circuit board manufacturing that companies are moving offshore in pursuit of competitive pricing. All these factors came into play in the SolarWinds attack.
Security researcher Vinoth Kumar reported that the password for the software company’s development server was allegedly “solarwinds123,” an egregious violation of fundamental standards of cybersecurity. SolarWinds’ sloppy password management is ironic in light of the Password Management Solution of the Year award the company received in 2019 for its Passportal product.
In a blog post, the company admitted that “the attackers were able to circumvent threat detection techniques employed by both SolarWinds, other private companies, and the federal government.”
The larger question is why SolarWinds, an American company, had to turn to foreign providers for software development. A Department of Defense report about supply chains characterizes the lack of software engineers as a crisis, partly because the education pipeline is not providing enough software engineers to meet demand in the commercial and defense sectors.
There’s also a shortage of cybersecurity talent in the U.S. Software engineers, developers and network engineers are among the workers in shortest supply across the U.S., and the lack of software engineers who focus on the security of software in particular is acute.
Fragmented authority
Though I’d argue SolarWinds has much to answer for, it should not have had to defend itself against a state-orchestrated cyber attack on its own. The 2018 National Cyber Strategy describes how supply chain security should work. The government determines the security of federal contractors like SolarWinds by reviewing their risk management strategies, ensuring that they are informed of threats and vulnerabilities and responding to incidents on their systems.
However, this official strategy split these responsibilities between the Pentagon for defense and intelligence systems and the Department of Homeland Security for civil agencies, continuing a fragmented approach to information security that began in the Reagan era. Execution of the strategy relies on the DOD’s U.S. Cyber Command and DHS’s Cyber and Infrastructure Security Agency. DOD’s strategy is to “defend forward”: that is, to disrupt malicious cyber activity at its source, which proved effective in the runup to the 2018 midterm elections. The Cyber and Infrastructure Security Agency, established in 2018, is responsible for providing information about threats to critical infrastructure sectors.
Neither agency appears to have sounded a warning or attempted to mitigate the attack on SolarWinds. The government’s response came only after the attack. The Cyber and Infrastructure Security Agency issued alerts and guidance, and a Cyber Unified Coordination Group was formed to facilitate coordination among federal agencies.
These tactical actions, while useful, were only a partial solution to the larger, strategic problem. The fragmentation of the authorities for national cyber defense evident in the SolarWinds hack is a strategic weakness that complicates cybersecurity for the government and private sector and invites more attacks on the software supply chain.
A wicked problem
National cyber defense is an example of a “wicked problem,” a policy problem that has no clear solution or measure of success. The Cyberspace Solarium Commission identified many inadequacies of U.S. national cyber defenses. In its 2020 report, the commission noted that “There is still not a clear unity of effort or theory of victory driving the federal government’s approach to protecting and securing cyberspace.”
Many of the factors that make developing a centralized national cyber defense challenging lie outside of the government’s direct control. For example, economic forces push technology companies to get their products to market quickly, which can lead them to take shortcuts that undermine security. Legislation along the lines of the Gramm-Leach-Bliley Act, which placed security requirements on financial institutions when it passed in 1999, could help counter the need for speed in software development. But software development companies are likely to push back against additional regulation and oversight.
The Biden administration appears to be taking the challenge seriously. The president has appointed a national cybersecurity director to coordinate related government efforts. It remains to be seen whether and how the administration will address the problem of fragmented authorities and clarify how the government will protect companies that supply critical digital infrastructure. It’s unreasonable to expect any U.S. company to be able to fend for itself against a foreign nation’s cyberattack.
Steps forward
In the meantime, software developers can apply the secure software development approach advocated by the National Institute of Standards and Technology. Government and industry can prioritize the development of artificial intelligence that can identify malware in existing systems. All this takes time, however, and hackers move quickly.
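As one small, concrete illustration of supply-chain hygiene in that spirit (a generic integrity check, not a specific NIST requirement, and not something that by itself would have caught a compromise inside a vendor's own signed build process), a build script can verify a downloaded third-party artifact against a checksum pinned when the dependency was first vetted:

```python
import hashlib
import sys

# Verify a downloaded artifact against a pinned SHA-256 digest recorded
# when the dependency was first reviewed. Paths and digests are supplied
# by the caller; nothing here refers to any real product's checksums.

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # usage: python verify_artifact.py <artifact-path> <expected-sha256>
    artifact_path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of_file(artifact_path)
    if actual == expected:
        print("checksum OK")
        sys.exit(0)
    print(f"checksum MISMATCH: expected {expected}, got {actual} - do not install")
    sys.exit(1)
```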
Finally, companies need to aggressively assess their vulnerabilities, particularly by engaging in more “red teaming” activities: that is, having employees, contractors or both play the role of hackers and attack the company.
Recognizing that hackers in the service of foreign adversaries are dedicated, thorough and not constrained by any rules is important for anticipating their next moves and reinforcing and improving U.S. national cyber defenses. Otherwise, Colonial Pipeline is unlikely to be the last victim of a major attack on U.S. infrastructure and SolarWinds is unlikely to be the last victim of a major attack on the U.S. software supply chain.
Written by Terry Thompson, Adjunct Instructor in Cybersecurity, Johns Hopkins University.
Credit: Contains modified Copernicus Sentinel data (2020), processed by ESA, CC BY-SA 3.0 IGO
The Copernicus Sentinel-2 mission takes us over Qeshm Island – the largest island in Iran.
Qeshm Island lies in the Strait of Hormuz, parallel to the Iranian coast from which it is separated by the Clarence Strait (Khuran). With an area of around 1200 sq km, the island has an irregular outline and shape often compared to that of an arrow. The island is approximately 135 km long and spans around 40 km at its widest point.
The image shows the largely arid land surfaces on both Qeshm Island and mainland Iran. The island generally has a rocky coastline except for the sandy bays and mud flats that fringe the northwest part of the island.
The Hara Forest Protected Area, a network of shallow waterways and forest, can be seen clearly in the image, between Qeshm Island and the mainland. Hara, which means ‘grey mangrove’ in the local language, is a large mangrove forest and protected area that attracts more than 150 species of migrating birds during spring, including the great egret and the western reef heron. The forest also hosts sea turtles and aquatic snakes.
The dome-shaped Namakdan mountain is visible in the southwest part of the island and features the Namakdan Cave – one of the longest salt caves in the world. With a length of six kilometers, the cave is filled with salt sculptures, salt rivers, and salt megadomes.
The water south of Qeshm Island appears particularly dark, while lighter, turquoise colors can be seen in the left of the image, most likely owing to shallow waters and sediment content. Several islands can be seen in the waters, including Hengam Island, visible just south of Qeshm; Larak Island; and Hormuz Island, which is known for its red, edible soil.
Several cloud formations can be seen in the bottom-right of the image, as well as a part of the Musandam Peninsula, the northeastern tip of the Arabian Peninsula. The peninsula’s jagged coastline features fjordlike inlets called ‘khors’ and its waters are home to dolphins and other marine life.
Data from the Copernicus Sentinel-2 mission can help monitor urban expansion, land-cover change and agriculture. The mission’s frequent revisits over the same area and high spatial resolution also allow changes in inland water bodies to be closely monitored.
Unprecedented high-resolution map of the Orion Nebula Cluster showing newborn stars (orange squares), gravitationally collapsing gas cores (red circles), and non-collapsing gas cores (blue crosses). Credit: Takemura et al.
A survey of star formation activity in the Orion Nebula Cluster found similar mass distributions for newborn stars and dense gas cores, which may evolve into stars. Counterintuitively, this means that the amount of gas a core accretes as it develops, and not the initial mass of the core, is the key factor in determining the final mass of the star it produces.
The Universe is populated with stars of various masses. Dense cores in clouds of interstellar gas collapse under their own gravity to form stars, but what determines the final mass of the star remains an open question. There are two competing theories. In the core-collapse model, larger stars form from larger cores. In the competitive accretion model, all cores start out about the same mass but accrete different amounts of gas from the surroundings as they grow.
To distinguish between these two scenarios, a research team led by Hideaki Takemura at the National Astronomical Observatory of Japan created a map of the Orion Nebula Cluster where new stars are forming, based on data from the American CARMA interferometer and NAOJ’s own Nobeyama 45-m Radio Telescope. Thanks to the unprecedented high resolution of the map, the team was able to compare the masses of the newly formed stars and gravitationally collapsing dense cores. They found that the mass distributions are similar for the two populations. They also found many smaller cores which don’t have strong enough gravity to contract into stars.
One would think that similar mass distributions for prestellar cores and newborn stars would favor the core-collapse model, but actually, because it is impossible for a core to impart all of its mass to a new star, this shows that continued gas inflow is an important factor, favoring the competitive accretion model.
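For readers curious what "similar mass distributions" means operationally, one standard way to compare two samples is a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic lognormal masses purely as stand-ins; it is not the team's catalog or their statistical method.

```python
import numpy as np
from scipy import stats

# Illustrative comparison of two mass samples, e.g. dense-core masses vs.
# newborn-star masses. The lognormal samples below are synthetic stand-ins;
# the real analysis uses the measured core and stellar mass functions.

rng = np.random.default_rng(0)
core_masses = rng.lognormal(mean=-0.5, sigma=0.6, size=200)   # solar masses (fake)
star_masses = rng.lognormal(mean=-0.5, sigma=0.6, size=200)   # solar masses (fake)

statistic, p_value = stats.ks_2samp(core_masses, star_masses)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
# A large p-value means the test finds no evidence that the two samples
# were drawn from different underlying distributions.
```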
Now the team will expand their map using additional data from CARMA and the Nobeyama 45-m Radio Telescope to see if the results from the Orion Nebula Cluster hold true for other regions.
Reference: “The Core Mass Function in the Orion Nebula Cluster Region: What Determines the Final Stellar Masses?” by Hideaki Takemura, Fumitaka Nakamura, Shuo Kong, Héctor G. Arce, John M. Carpenter, Volker Ossenkopf-Okada, Ralf Klessen, Patricio Sanhueza, Yoshito Shimajiri, Takashi Tsukagoshi, Ryohei Kawabe, Shun Ishii, Kazuhito Dobashi, Tomomi Shimoikura, Paul F. Goldsmith, Álvaro Sánchez-Monge, Jens Kauffmann, Thushara G. S. Pillai, Paolo Padoan, Adam Ginsberg, Rowan J. Smith, John Bally, Steve Mairs, Jaime E. Pineda, Dariusz C. Lis, Blakesley Burkhart, Peter Schilke, Hope How-Huan Chen, Andrea Isella, Rachel K. Friesen, Alyssa A. Goodman and Doyal A. Harper, 22 March 2021, Astrophysical Journal Letters. DOI: 10.3847/2041-8213/abe7dd
Funding: Japan Society for the Promotion of Science, Deutsche Forschungsgemeinschaft, Heidelberg Cluster of Excellence STRUCTURES, European Research Council, Spanish Ministry of Economy and Competitiveness, the State Agency for Research of the Spanish Ministry of Science and Innovation
Cryo-electron microscopy reconstruction of Binjari virus. The projecting spikes are a typical feature of immature flaviviruses such as dengue virus but reveal an unexpected organization. Credit: Associate Professor Fasseli Coulibaly
Better designed vaccines for insect-spread viruses like dengue and Zika now possible.
Better designed vaccines for insect-spread viruses like dengue and Zika are likely after researchers discovered that earlier models of immature flavivirus particles had been misinterpreted.
Researchers from The University of Queensland and Monash University have now determined the first complete 3D molecular structure of the immature flavivirus, revealing an unexpected organization.
UQ researcher Associate Professor Daniel Watterson said the team was studying the insect-specific Binjari virus when they made the discovery.
“We were using Australia’s safe-to-handle Binjari virus, which we combine with more dangerous viral genes to make safer and more effective vaccines,” Dr. Watterson said.
“But when analyzing Binjari we could clearly see that the molecular structure we’ve all been working from since 2008 wasn’t quite correct.
“Imagine trying to build a house when your blueprints are wrong – that’s exactly what it’s like when you’re attempting to build effective vaccines and treatments and your molecular ‘map’ is not quite right.”
The team used a technique known as cryogenic electron microscopy to image the virus, generating high-resolution data from Monash’s Ramaciotti Centre for Cryo-Electron Microscopy facility.
With thousands of collected two-dimensional images of the virus, the researchers then combined them using a high-performance computing platform called ‘MASSIVE’ to construct a high-resolution 3D structure.
Monash’s Associate Professor Fasséli Coulibaly, a co-leader of the study, said the revelation could lead to new and better vaccines for flaviviruses, which have a huge disease burden globally.
“Flaviviruses are globally distributed and dengue virus alone infects around 400 million people annually,” Dr. Coulibaly said.
“They cause a spectrum of potentially severe diseases including hepatitis, vascular shock syndrome, encephalitis, acute flaccid paralysis, congenital abnormalities, and fetal death.
“This structure defines the exact wiring of the immature virus before it becomes infectious, and we now have a better understanding of the levers and pulleys involved in viral assembly.
“This is a continuation of fundamental research by us and others and, without this hard-won basic knowledge, we wouldn’t have the solid foundation needed to design tomorrow’s treatments.”
Reference: “The structure of an infectious immature flavivirus redefines viral architecture and maturation” by Natalee D. Newton, Joshua M. Hardy, Naphak Modhiran, Leon E. Hugo, Alberto A. Amarilla, Summa Bibby, Hariprasad Venugopal, Jessica J. Harrison, Renee J. Traves, Roy A. Hall, Jody Hobson-Peters, Fasséli Coulibaly and Daniel Watterson, 14 May 2021, Science Advances. DOI: 10.1126/sciadv.abe4507
The joint first authors are Dr. Natalee Newton from UQ’s Watterson lab and Dr. Joshua Hardy from the Coulibaly lab at the Monash Biomedicine Discovery Institute.
Left: This is an image of the star HR 8799 taken by Hubble’s Near Infrared Camera and Multi-Object Spectrometer (NICMOS) in 1998. A mask within the camera (coronagraph) blocks most of the light from the star. Astronomers also used software to digitally subtract more starlight. Nevertheless, scattered light from HR 8799 dominates the image, obscuring four faint planets later discovered from ground-based observations. Right: A re-analysis of NICMOS data in 2011 uncovered three of the exoplanets, which were not seen in the 1998 images. Webb will probe the planets’ atmospheres at infrared wavelengths astronomers have rarely used to image distant worlds. Credit: NASA, ESA, and R. Soummer (STScI)
NASA’s Webb to Study Young Exoplanets on the Edge
Webb will probe the outer realm of exoplanetary systems, investigating known planets and hunting for new worlds.
Although more than 4,000 planets have been discovered around other stars, they don’t represent the wide diversity of possible alien worlds. Most of the exoplanets detected so far are so-called “star huggers”: they orbit so close to their host stars that they complete an orbit in days or weeks. These are the easiest to find with current detection techniques.
But there’s a vast, mostly uncharted landscape to hunt for exoplanets in more distant orbits. Astronomers have only begun to explore this frontier. The planets are far enough away from their stars that telescopes equipped with masks to block out a star’s blinding glare can see the planets directly. The easiest planets to spot are hot, newly formed worlds. They are young enough that they still glow in infrared light with the heat from their formation.
This outer realm of exoplanetary systems is an ideal hunting ground for NASA’s upcoming James Webb Space Telescope. Webb will probe the atmospheres of nearby known exoplanets, such as HR 8799 and 51 Eridani b, at infrared wavelengths. Webb also will hunt for other distant worlds—possibly down to Saturn-size—on the outskirts of planetary systems that cannot be detected with ground-based telescopes.
This schematic shows the positions of the four exoplanets orbiting far away from the nearby star HR 8799. The orbits appear elongated because of a slight tilt of the plane of the orbits relative to our line of sight. The size of the HR 8799 planetary system is comparable to our solar system, as indicated by the orbit of Neptune, shown to scale. Credit: NASA, ESA, and R. Soummer (STScI)
Before planets around other stars were first discovered in the 1990s, these far-flung exotic worlds lived only in the imagination of science fiction writers.
But even their creative minds could not have conceived of the variety of worlds astronomers have uncovered. Many of these worlds, called exoplanets, are vastly different from our solar system’s family of planets. They range from star-hugging “hot Jupiters” to oversized rocky planets dubbed “super Earths.” Our universe apparently is stranger than fiction.
Seeing these distant worlds isn’t easy because they get lost in the glare of their host stars. Trying to detect them is like straining to see a firefly hovering next to a lighthouse’s brilliant beacon.
That’s why astronomers have identified most of the more than 4,000 exoplanets found so far using indirect techniques, such as through a star’s slight wobble or its unexpected dimming as a planet passes in front of it, blocking some of the starlight.
These techniques work best, however, for planets orbiting close to their stars, where astronomers can detect changes over weeks or even days as the planet completes its racetrack orbit. But finding only star-skimming planets doesn’t provide astronomers with a comprehensive picture of all the possible worlds in star systems.
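For the dimming (transit) technique just mentioned, the size of the dip follows from simple geometry: the fractional drop in starlight is roughly the square of the planet-to-star radius ratio. Below is a quick illustrative calculation with rounded textbook radii, not tied to any system discussed here:

```python
# Transit depth: the fractional dimming when a planet crosses its star's disk
# is roughly (planet radius / star radius) squared. Radii below are
# approximate textbook values, for illustration only.

SUN_RADIUS_KM = 696_000
JUPITER_RADIUS_KM = 71_500
EARTH_RADIUS_KM = 6_371

def transit_depth(planet_radius_km: float, star_radius_km: float = SUN_RADIUS_KM) -> float:
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter-size planet, Sun-like star: {transit_depth(JUPITER_RADIUS_KM):.2%} dip")
print(f"Earth-size planet, Sun-like star:   {transit_depth(EARTH_RADIUS_KM):.4%} dip")
```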
This discovery image of a Jupiter-sized extrasolar planet orbiting the nearby star 51 Eridani was taken in near-infrared light in 2014 by the Gemini Planet Imager. The bright central star is hidden behind a mask in the center of the image to enable the detection of the exoplanet, which is 1 million times fainter than 51 Eridani. The exoplanet is on the outskirts of the planetary system 11 billion miles from its star. Webb will probe the planet’s atmosphere at infrared wavelengths astronomers have rarely used to image distant worlds. Credit: International Gemini Observatory/NOIRLab/NSF/AURA, J. Rameau (University of Montreal), and C. Marois (National Research Council of Canada Herzberg)
Another technique researchers use in the hunt for exoplanets, which are planets orbiting other stars, is one that focuses on planets that are farther away from a star’s blinding glare. Scientists, using specialized imaging techniques that block out the glare from the star, have uncovered young exoplanets that are so hot they glow in infrared light. In this way, some exoplanets can be directly seen and studied.
NASA’s upcoming James Webb Space Telescope will help astronomers probe farther into this bold new frontier. Webb, like some ground-based telescopes, is equipped with special optical systems called coronagraphs, which use masks designed to block out as much starlight as possible to study faint exoplanets and to uncover new worlds.
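To put the glare problem in numbers: the 51 Eridani b image caption above notes the planet is about a million times fainter than its star, which on the astronomer's magnitude scale is a difference of roughly 15 magnitudes. The conversion is a one-liner (a standard textbook relation, not a Webb data product):

```python
import math

# Convert a planet-to-star flux ratio into a magnitude difference:
# delta_m = -2.5 * log10(flux_ratio). A planet one million times fainter
# than its star is ~15 magnitudes fainter.

def magnitude_difference(flux_ratio: float) -> float:
    return -2.5 * math.log10(flux_ratio)

for ratio in (1e-4, 1e-6, 1e-8):
    print(f"flux ratio {ratio:.0e} -> {magnitude_difference(ratio):.1f} mag fainter")
```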
Two targets early in Webb’s mission are the planetary systems 51 Eridani and HR 8799. Out of the few dozen directly imaged planets, astronomers plan to use Webb to analyze in detail the systems that are closest to Earth and have planets at the widest separations from their stars. This means that they appear far enough away from a star’s glare to be directly observed. The HR 8799 system resides 133 light-years and 51 Eridani 96 light-years from Earth.
Webb’s Planetary Targets
Two observing programs early in Webb’s mission combine the spectroscopic capabilities of the Near Infrared Spectrograph (NIRSpec) and the imaging of the Near Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI) to study the four giant planets in the HR 8799 system. In a third program, researchers will use NIRCam to analyze the giant planet in 51 Eridani.
The four giant planets in the HR 8799 system are each roughly 10 Jupiter masses. They orbit more than 14 billion miles from a star that is slightly more massive than the Sun. The giant planet in 51 Eridani is twice the mass of Jupiter and orbits about 11 billion miles from a Sun-like star. Both planetary systems have orbits oriented face-on toward Earth. This orientation gives astronomers a unique opportunity to get a bird’s-eye view down on top of the systems, like looking at the concentric rings on an archery target.
Many exoplanets found in the outer orbits of their stars are vastly different from our solar system planets. Most of the exoplanets discovered in this outer region, including those in HR 8799, are between 5 and 10 Jupiter masses, making them among the most massive planets found to date.
These outer exoplanets are relatively young, from tens of millions to hundreds of millions of years old—much younger than our solar system’s 4.5 billion years. So they’re still glowing with heat from their formation. The images of these exoplanets are essentially baby pictures, revealing planets in their youth.
This video shows four Jupiter-sized exoplanets orbiting billions of miles away from their star in the nearby HR 8799 system. The planetary system is oriented face-on toward Earth, giving astronomers a unique bird’s-eye view of the planets’ motion. The exoplanets are orbiting so far away from their star that they take anywhere from decades to centuries to complete an orbit. The video consists of seven images of the system taken over a seven-year period with the W.M. Keck Observatory on Mauna Kea, Hawaii. Keck’s coronagraph blocks out most of the starlight so that the much fainter and smaller exoplanets can be seen. Credit: Jason Wang (Caltech) and Christian Marois (NRC Herzberg)
Webb will probe into the mid-infrared, a wavelength range astronomers have rarely used before to image distant worlds. This infrared “window” is difficult to observe from the ground because of thermal emission from—and absorption in—Earth’s atmosphere.
“Webb’s strong point is the uninhibited light coming through space in the mid-infrared range,” said Klaus Hodapp of the University of Hawaii in Hilo, lead investigator of the NIRSpec observations of the HR 8799 system. “Earth’s atmosphere is pretty difficult to work through. The major absorption molecules in our own atmosphere prevent us from seeing interesting features in planets.”
The mid-infrared “is the region where Webb really will make seminal contributions to understanding what are the particular molecules, what are the properties of the atmosphere that we hope to find which we don’t really get just from the shorter, near-infrared wavelengths,” said Charles Beichman of NASA’s Jet Propulsion Laboratory in Pasadena, California, lead investigator of the NIRCam and MIRI observations of the HR 8799 system. “We’ll build on what the ground-based observatories have done, but the goal is to expand on that in a way that would be impossible without Webb.”
How Do Planets Form?
One of the researchers’ main goals in both systems is to use Webb to help determine how the exoplanets formed. Were they created through a buildup of material in the disk surrounding the star, enriched in heavy elements such as carbon, just as Jupiter probably did? Or, did they form from the collapse of a hydrogen cloud, like a star, and become smaller under the relentless pull of gravity?
Atmospheric makeup can provide clues to a planet’s birth. “One of the things we’d like to understand is the ratio of the elements that have gone into the formation of these planets,” Beichman said. “In particular, carbon versus oxygen tells you quite a lot about where the gas that formed the planet comes from. Did it come from a disk that accreted a lot of the heavier elements or did it come from the interstellar medium? So it’s what we call the carbon-to-oxygen ratio that is quite indicative of formation mechanisms.”
This video shows a Jupiter-sized exoplanet orbiting far away—roughly 11 billion miles—from a nearby, Sun-like star, 51 Eridani. The planetary system is oriented face-on toward Earth, giving astronomers a unique bird’s-eye view of the planet’s motion. The video consists of five images taken over four years with the Gemini South Telescope’s Gemini Planet Imager, in Chile. Gemini’s coronagraph blocks out most of the starlight so that the much fainter and smaller exoplanet can be seen. Credit: Jason Wang (Caltech)/Gemini Planet Imager Exoplanet Survey
To answer these questions, the researchers will use Webb to probe deeper into the exoplanets’ atmospheres. NIRCam, for example, will measure the atmospheric fingerprints of elements like methane. It also will look at cloud features and the temperatures of these planets. “We already have a lot of information at these near-infrared wavelengths from ground-based facilities,” said Marshall Perrin of the Space Telescope Science Institute in Baltimore, Maryland, lead investigator of NIRCam observations of 51 Eridani b. “But the data from Webb will be much more precise, much more sensitive. We’ll have a more complete set of wavelengths, including filling in gaps where you can’t get those wavelengths from the ground.”
The astronomers will also use Webb and its superb sensitivity to hunt for less-massive planets far from their star. “From ground-based observations, we know that these massive planets are relatively rare,” Perrin said. “But we also know that for the inner parts of systems, lower-mass planets are dramatically more common than larger-mass planets. So the question is, does it also hold true for these further separations out?” Beichman added, “Webb’s operation in the cold environment of space allows a search for fainter, smaller planets, impossible to detect from the ground.”
Another goal is understanding how the myriad planetary systems discovered so far were created.
“I think what we are finding is that there is a huge diversity in solar systems,” Perrin said. “You have systems where you have these hot Jupiter planets in very close orbits. You have systems where you don’t. You have systems where you have a 10-Jupiter-mass planet and ones in which you have nothing more massive than several Earths. We ultimately want to understand how the diversity of planetary system formation depends on the environment of the star, the mass of the star, all sorts of other things and eventually through these population-level studies, we hope to place our own solar system in context.”
The NIRSpec spectroscopic observations of HR 8799 and the NIRCam observations of 51 Eridani are part of the Guaranteed Time Observations programs that will be conducted shortly after Webb’s launch later this year. The NIRCam and MIRI observations of HR 8799 are a collaboration of two instrument teams and are also part of the Guaranteed Time Observations program.
The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.
General view of the cave site of Panga ya Saidi. Note trench excavation where burial was unearthed. Credit: Mohammad Javad Shoaee
The discovery of the earliest human burial site yet found in Africa, by an international team including several CNRS researchers,[1] has just been announced in the journal Nature.
At Panga ya Saidi, in Kenya, north of Mombasa, the body of a three-year-old, dubbed Mtoto (Swahili for ‘child’) by the researchers, was deposited and buried in an excavated pit approximately 78,000 years ago. Through analysis of sediments and the arrangement of the bones, the research team showed that the body had been protected by being wrapped in a shroud made of perishable material, and that the head had likely rested on an object also of perishable material.
3D reconstruction of the arrangement of the child’s remains (top), artistic reconstruction of the burial (bottom). Credit: Jorge González / Elena Santos / F. Fuego / MaxPlanck Institute / CENIEH
Though there are no signs of offerings or ochre, both common at more recent burial sites, the funerary treatment given Mtoto suggests a complex ritual that likely required the active participation of many members of the child’s community.
Though Mtoto was a Homo sapiens, the child’s dental morphology, in contrast with that observed in human remains of the same period, preserves certain archaic traits connecting it to distant African ancestors. This apparently confirms that, as has often been posited in recent years, our species has extremely old and regionally diverse roots in the African continent where it arose.
Notes
Participating CNRS researchers hail from the PACEA (CNRS / University of Bordeaux / French Ministry of Culture) and IRAMAT (CNRS / Université Bourgogne Franche-Comté / University of Orléans / Bordeaux Montaigne University) research units. In France, this research was funded by the LaScArBx Laboratory of Excellence (LabEx).
Reference: “Earliest known human burial in Africa” by María Martinón-Torres, Francesco d’Errico, Elena Santos, Ana Álvaro Gallo, Noel Amano, William Archer, Simon J. Armitage, Juan Luis Arsuaga, José María Bermúdez de Castro, James Blinkhorn, Alison Crowther, Katerina Douka, Stéphan Dubernet, Patrick Faulkner, Pilar Fernández-Colón, Nikos Kourampas, Jorge González García, David Larreina, François-Xavier Le Bourdonnec, George MacLeod, Laura Martín-Francés, Diyendo Massilani, Julio Mercader, Jennifer M. Miller, Emmanuel Ndiema, Belén Notario, Africa Pitarch Martí, Mary E. Prendergast, Alain Queffelec, Solange Rigaud, Patrick Roberts, Mohammad Javad Shoaee, Ceri Shipton, Ian Simpson, Nicole Boivin and Michael D. Petraglia, 5 May 2021, Nature. DOI: 10.1038/s41586-021-03457-8
Richard Whitlock, professor of surgery at McMaster University, performing heart surgery. Credit: Hamilton Health Sciences
Removing left atrial appendage cuts the risk of strokes by more than one-third in patients with atrial fibrillation.
A simple surgery saves patients with heart arrhythmia from often-lethal strokes, says a large international study led by McMaster University.
Researchers found that removing the left atrial appendage — an unused, finger-like tissue that can trap blood in the heart chamber and increase the risk of clots — cuts the risk of strokes by more than one-third in patients with atrial fibrillation.
Even better, the reduced clotting risk comes on top of any other benefits conferred by blood-thinner medications patients with this condition are usually prescribed.
“If you have atrial fibrillation and are undergoing heart surgery, the surgeon should be removing your left atrial appendage, because it is a set-up for forming clots. Our trial has shown this to be both safe and effective for stroke prevention,” said Richard Whitlock, first author of the study.
“This is going to have a positive impact on tens of thousands of patients globally.”
The left atrial appendage, intact. Credit: Heather Borsellino
Whitlock is a scientist at the Population Health Research Institute (PHRI), a joint institute of McMaster University and Hamilton Health Sciences (HHS). He is a professor of surgery at McMaster, the Canada Research Chair in cardiovascular surgical trials and a cardiac surgeon for HHS, and is supported by a Heart and Stroke Foundation career award.
The co-principal investigator of the study is Stuart Connolly, who has also advanced this field by establishing the efficacy and safety of newer blood thinners. He is a professor emeritus of medicine at McMaster, a PHRI senior scientist and an HHS cardiologist.
The left atrial appendage after occlusion. Credit: Heather Borsellino
“The results of this study will change practice right away because this procedure is simple, quick and safe for the 15 percent of heart surgery patients who have atrial fibrillation. This will prevent a great burden of suffering due to stroke,” Connolly said.
The study results were fast tracked into publication by The New England Journal of Medicine and presented at the American College of Cardiology conference today.
The study tracked 4,811 people in 27 countries who are living with atrial fibrillation and taking blood thinners. Consenting patients undergoing cardiopulmonary bypass surgery were randomly selected for the additional left atrial appendage occlusion surgery, and their outcomes were compared with those of patients who received only medicine. They were all followed for a median of four years.
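To see what a "more than one-third" reduction means in a trial of this shape, relative risk reduction is one minus the ratio of the event rates in the two arms. The counts below are invented for illustration only; the trial's actual numbers are in the New England Journal of Medicine paper.

```python
# Hypothetical illustration of a relative risk reduction calculation.
# These event counts are made up; see the NEJM paper for the trial's data.

occlusion_events, occlusion_patients = 115, 2400    # surgery + blood thinners (fake)
control_events, control_patients = 180, 2400        # blood thinners only (fake)

risk_occlusion = occlusion_events / occlusion_patients
risk_control = control_events / control_patients

relative_risk = risk_occlusion / risk_control
relative_risk_reduction = 1 - relative_risk

print(f"Relative risk: {relative_risk:.2f}")
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # ~36% here
```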
Stuart Connolly. Credit: Population Health Research Institute
Whitlock said it had been suspected since the 1940s that blood clots can form in the left atrial appendage in patients with atrial fibrillation, and that it made sense to cut this useless structure off if the heart was exposed for other surgery. This has now been proven to be true.
Atrial fibrillation is common in elderly people and is responsible for about 25 percent of ischemic strokes, which are caused when blood clots block arteries supplying parts of the brain. The average age of patients in the study was 71.
“In the past all we had was medicine. Now we can treat atrial fibrillation with both medicines and surgery to ensure a much better outcome,” said Whitlock.
He said that the current study tested the procedure during cardiac surgery being undertaken for other reasons, but the procedure can also be done through less invasive methods for patients not having heart surgery. He added that future studies to examine that approach will be important.
Whitlock said the left atrial appendage is a leftover from how a person’s heart forms as an embryo and it has little function later in life.
“This is an inexpensive procedure that is safe, without any long-term adverse effects, and the impact is long-term.”
Reference: 15 May 2021, New England Journal of Medicine.
External funding for the study came from the Canadian Institutes of Health Research and the Heart and Stroke Foundation of Canada.
Figure 1: cosmological simulation of the distant Universe. The image shows the light emitted by hydrogen atoms in the cosmic web in a region roughly 15 million light years across. In addition to the very weak emission from intergalactic gas, a number of point sources can be seen: these are galaxies in the process of forming their first stars. Credit: Jeremy Blaizot / projet SPHINX
In the Universe, galaxies are distributed along extremely tenuous filaments of gas millions of light years long separated by voids, forming the cosmic web.
The MUSE instrument on the Very Large Telescope has captured an image of several filaments in the early Universe, revealing the unexpected presence of billions of dwarf galaxies in the filaments.
Although the filaments of gas in which galaxies are born have long been predicted by cosmological models, we have so far had no real images of such objects. Now for the first time, several filaments of the ‘cosmic web’ have been directly observed using the MUSE[1] instrument installed on ESO’s Very Large Telescope in Chile. These observations of the early Universe, 1 to 2 billion years after the Big Bang, point to the existence of a multitude of hitherto unsuspected dwarf galaxies. Carried out by an international collaboration led by the Centre de Recherche Astrophysique de Lyon (CNRS/Université Lyon 1/ENS de Lyon), also involving the Lagrange laboratory (CNRS/Université Côte d’Azur/Observatoire de la Côte d’Azur),[2] the study is published in the journal Astronomy & Astrophysics.
Figure 2: the 2250 galaxies in the ‘cone’ of the Universe observed by MUSE are shown here according to the age of the Universe (in billions of years). The period of the early Universe (0.8 to 2.2 billion years after the Big Bang) explored in this study is shown in red. The 22 regions with galaxy over-density are indicated by grey rectangles. The 5 regions where filaments have been identified most prominently are shown in blue. Credit: Roland Bacon / David Mary
The filamentary structure of hydrogen gas in which galaxies form, known as the cosmic web, is one of the major predictions of the model of the Big Bang and of galaxy formation [figure 1]. Until now, all that was known about the web was limited to a few specific regions, particularly in the direction of quasars, whose powerful radiation acts like car headlights, revealing gas clouds along the line of sight. However, these regions are poorly representative of the whole network of filaments where most galaxies, including our own, were born. Direct observation of the faint light emitted by the gas making up the filaments was a holy grail which has now been attained by an international team headed by Roland Bacon, CNRS researcher at the Centre de Recherche Astrophysique de Lyon (CNRS/Université Lyon 1/ENS de Lyon).
Figure 3: one of the hydrogen filaments (in blue) discovered by MUSE in the Hubble Ultra-Deep Field. It is located in the constellation Fornax at a distance of 11.5 billion light years, and stretches across 15 million light years. The image in the background is from Hubble. Credit: Roland Bacon, David Mary, ESO and NASA
The team took the bold step of pointing ESO’s Very Large Telescope, equipped with the MUSE instrument coupled to the telescope’s adaptive optics system, at a single region of the sky for over 140 hours. Together, the two instruments form one of the most powerful systems in the world.[3] The region selected forms part of the Hubble Ultra-Deep Field, which was until now the deepest image of the cosmos ever obtained. However, Hubble has now been surpassed, since 40% of the galaxies discovered by MUSE have no counterpart in the Hubble images.
Figure 4: cosmological simulation of a filament made up of hundreds of thousands of small galaxies. The image on the left shows the emissions produced by all the galaxies as it might be observed in situ. The image on the right shows the filament as it would be seen by MUSE. Even with a very long exposure time, the vast majority of the galaxies cannot be detected individually. However, the light from all these small galaxies is detected as a diffuse background, rather like the Milky Way when seen with the naked eye. Credit: Thibault Garel and Roland Bacon
After meticulous planning, it took eight months to carry out this exceptional observing campaign. This was followed by a year of data processing and analysis, which for the first time revealed light from the hydrogen filaments, as well as images of several filaments as they were one to two billion years after the Big Bang, a key period for understanding how galaxies formed from the gas in the cosmic web [figures 2 and 3]. However, the biggest surprise for the team was when simulations showed that the light from the gas came from a hitherto invisible population of billions of dwarf galaxies spawning a host of stars [figure 4].[4] Although these galaxies are too faint to be detected individually with current instruments, their existence will have major consequences for galaxy formation models, with implications that scientists are only just beginning to explore.
Notes
MUSE, which stands for Multi Unit Spectroscopic Explorer, is a 3D spectrograph designed to explore the distant Universe. The construction of the instrument was led by the Centre de Recherche Astrophysique de Lyon (CNRS/Université Claude Bernard-Lyon 1/ENS de Lyon).
Other French laboratories involved: Laboratoire d’Astrophysique de Marseille (CNRS/Aix-Marseille Université/CNES), Institut de Recherche en Astrophysique et Planétologie (CNRS/Université Toulouse III – Paul Sabatier/CNES).
Until now, theory predicted that the light came from the diffuse cosmic ultraviolet background radiation (very weak background radiation produced by all the galaxies and stars) which, by heating the gas in the filaments, causes them to glow.
Reference: “The MUSE Extremely Deep Field: The cosmic web in emission at high redshift” by R. Bacon, D. Mary, T. Garel, J. Blaizot, M. Maseda, J. Schaye, L. Wisotzki, S. Conseil, J. Brinchmann, F. Leclercq, V. Abril-Melgarejo, L. Boogaard, N. F. Bouché, T. Contini, A. Feltre, B. Guiderdoni, C. Herenz, W. Kollatschny, H. Kusakabe, J. Matthee, L. Michel-Dansac, T. Nanayakkara, J. Richard, M. Roth, K. B. Schmidt, M. Steinmetz, L. Tresse, T. Urrutia, A. Verhamme, P. M. Weilbacher, J. Zabl and S. L. Zoutendijk, 18 March 2021, Astronomy & Astrophysics. DOI: 10.1051/0004-6361/202039887
A lava flow from Hawaii’s Kilauea Volcano enters the ocean near Isaac Hale Beach Park on August 5, 2018. The volcano’s 2018 eruption was its largest in over 200 years. Credit: USGS
Caldera Collapse Increases the Size and Duration of Volcanic Eruptions
Scientists have figured out what triggers large-scale volcanic eruptions and what conditions likely lead to them.
Hawaii’s Kilauea is one of the most active volcanoes in the world. Because of this and its relative ease of accessibility, it is also among the most heavily outfitted with monitoring equipment – instruments that measure and record everything from earthquakes and ground movement to lava volume and advancement.
Kilauea’s 2018 eruption, however, was especially massive. In fact, it was the volcano’s largest eruption in over 200 years. Scientists at NASA’s Jet Propulsion Laboratory in Southern California used the abundance of data collected from this rare event to shed light on the cause of large-scale eruptions like this one and, perhaps more importantly, what mechanisms trigger them.
This image shows the lava flow from the 2018 eruption of Kilauea prior to the collapse of the caldera. Credit: NASA/JPL-Caltech
“Ultimately, what caused this eruption to be so much larger than normal was the collapse of the volcano’s caldera – the large, craterlike depression at the volcano’s summit,” said JPL’s Alberto Roman, lead author of the new study published recently in Nature. “During a caldera collapse, a massive block of rock near the top of the volcano slides down into the volcano. As it slides, gets stuck on the jagged walls around it, and slides some more, the block of rock squeezes out more magma than would ordinarily be expelled.”
This image shows the much larger lava flow that followed the collapse of the caldera. Credit: NASA/JPL-Caltech
But what the science team really wanted to know was what caused the caldera to collapse in the first place – and they found their answer.
The likely culprit? Vents – openings through which lava flows – located a distance away from, and at a much lower elevation than, the volcano’s summit.
“Sometimes, volcanoes erupt at the summit, but an eruption can also occur when lava breaks through vents much lower down the volcano,” said JPL’s Paul Lundgren, co-author of the study. “Eruption through these low-elevation vents likely led to the collapse of the caldera.”
During an eruption, the surface of a volcano deforms, or changes shape. The color bands in the lower-right animation box show those changes from before to midway through Kilauea’s 2018 eruption. The closer the color bands are to one another, the more severe the deformation in that area – much like the contour lines on a topographic map denote rapidly changing altitude. Credit: NASA/JPL-Caltech
Lundgren compares this type of vent to the spigot on a collapsible water jug you’d take on a camping trip. As the water level drops toward the location of the spigot, the flow of water slows or stops. Likewise, the lower down the volcano a vent (or “spigot”) is located, the longer lava is likely to flow before reaching a stopping point.
A large quantity of magma can be expelled quickly from the chamber (or chambers) beneath the volcano through these vents, leaving the rocky floor and walls of the caldera above the chamber without sufficient support. The rock from the caldera can then collapse into the magma chamber.
As the rock falls, it pressurizes the magma chambers – for Kilauea, the research team identified two of them – increasing the magma flow to the distant vents as well as the total volume of the eruption. The pressurization is akin to squeezing the water jug to force out the last little bit of water.
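As a rough, hedged illustration of the water-jug picture (not the model from the Nature paper), the pressure driving magma out of a vent can be thought of as the hydrostatic head of magma above that vent plus any extra squeeze from the subsiding caldera block. All densities, elevations, and loads in the sketch below are assumed purely for illustration.

```python
# Toy illustration of the "water jug" analogy (not the published model):
# lava keeps flowing while the pressure driving magma toward the vent is positive.
# All numbers below are illustrative assumptions.

RHO_MAGMA = 2600.0   # magma density, kg/m^3 (assumed)
G = 9.81             # gravity, m/s^2

def driving_pressure(magma_level_m, vent_elevation_m, block_load_pa=0.0):
    """Pressure pushing magma out of a vent: hydrostatic head above the vent
    plus any extra load from a subsiding caldera block."""
    head = magma_level_m - vent_elevation_m
    return RHO_MAGMA * G * max(head, 0.0) + block_load_pa

# A low-elevation flank vent keeps a positive head (and keeps erupting) long after
# a summit vent would have shut off.
for vent in (1100.0, 300.0):   # vent elevations in meters (assumed)
    p = driving_pressure(magma_level_m=900.0, vent_elevation_m=vent)
    print(f"vent at {vent:.0f} m: driving pressure ~ {p/1e6:.1f} MPa")

# Caldera collapse acts like squeezing the jug: the sinking rock block adds pressure,
# boosting flow to the distant vent.
print(driving_pressure(900.0, 300.0, block_load_pa=5e6) / 1e6, "MPa with block load")
```

In this cartoon, the summit-height vent shuts off (zero head) while the low vent keeps draining, and the collapsing block simply adds to the driving pressure, which is the qualitative behavior the researchers describe.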
After developing their model of these eruption processes using the wealth of data available from Kilauea, the researchers also compared the model’s predictions to observations from similar caldera-collapse-driven eruptions at other volcanoes. The results were consistent. Although the model doesn’t predict when a volcano is going to erupt, it can provide crucial insight into the likely severity of an eruption once it begins.
“If we see an eruption at a low-elevation vent, that is a red flag or warning that caldera collapse is possible,” said Roman. “Similarly, if we detect earthquakes consistent with the slipping of the caldera rock block, we now know that the eruption will likely be much larger than usual.”
Reference: “Dynamics of large effusive eruptions driven by caldera collapse” by Alberto Roman and Paul Lundgren, 14 April 2021, Nature. DOI: 10.1038/s41586-021-03414-5
Egyptian goose (Alopochen aegyptiaca) originally from Africa and now established in Central and Western Europe. Credit: Professor Tim Blackburn, UCL
The number of alien (non-native) species, particularly insects, arthropods and birds, is expected to increase globally by 36% by the middle of this century, compared to 2005, finds new research by an international team involving UCL.
Published in Global Change Biology, the study also predicts the arrival of around 2,500 new alien species in Europe, which translates to an increase of 64% for the continent over the 45-year period.
The research team, led by the German Senckenberg Biodiversity and Climate Research Centre, hopes that stricter biosecurity regulations could reduce this number.
Alien species are those that humans have moved around the world to places where they do not naturally occur. More than 35,000 such species had been recorded by 2005 (the date of the most recent comprehensive global catalogue). Some of these aliens can go on to become invasive, with damaging impacts to ecosystems and economies. Alien species are one of the main drivers of extinctions of animals and plants.
Co-author Professor Tim Blackburn (UCL Centre for Biodiversity & Environment Research and the Institute of Zoology, ZSL) said: “Our study predicts that alien species will continue to be added to ecosystems at high rates through the next few decades, which is concerning as this could contribute to harmful biodiversity change and extinction.
“But we are not helpless bystanders: with a concerted global effort to combat this, it should be possible to slow down or reverse this trend.”
Box tree moth, native to east Asia and now found across Europe. Credit: Professor Tim Blackburn, UCL
For the study, the research team developed a mathematical model to calculate for the first time how many more aliens would be expected by 2050, based on estimated sizes of source pools (the species that could end up becoming invasive) and dynamics of historical invasions, under a ‘business-as-usual’ scenario that assumes a continuation of current trends.
The model predicts a 36% increase in the number of alien plant and animal species worldwide by 2050, compared to 2005 levels.
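The published model is considerably more sophisticated, but the flavor of a “business-as-usual” extrapolation can be sketched as below. This is an illustrative toy, not the Seebens et al. model: the pre-2005 counts are invented, and only the roughly 35,000-species figure for 2005 comes from the article.

```python
# Toy business-as-usual projection (illustrative only; not the published model).
# Assume alien species records accumulate roughly linearly in recent decades,
# fit that rate from made-up historical counts, and extrapolate to 2050.
import numpy as np

years = np.array([1950, 1970, 1990, 2005])             # hypothetical census years
cum_species = np.array([18000, 24000, 30000, 35000])   # cumulative counts (only 2005 is from the article)

rate, intercept = np.polyfit(years, cum_species, 1)    # species added per year
projected_2050 = rate * 2050 + intercept

increase = (projected_2050 - cum_species[-1]) / cum_species[-1]
print(f"projected relative increase 2005 -> 2050: {increase:.0%}")
```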
The study identifies high levels of variation between regions. The largest increase is expected in Europe, where the number of alien species will increase by 64% by the middle of the century. Additional alien hotspots are predicted to include temperate latitudes of Asia, North America, and South America. The lowest relative increase in alien species is expected in Australia.
Europe will also see the largest increase in absolute numbers of alien species, with around 2,500 new aliens predicted.
Lead author Dr Hanno Seebens (Senckenberg Biodiversity and Climate Research Centre, Germany) said: “These will primarily include rather inconspicuous new arrivals such as insects, molluscs, and crustaceans. In contrast, there will be very few new alien mammal species such as the well-known raccoon.”
Co-author Dr Franz Essl (University of Vienna) added: “Increases are expected to be particularly large for insects and other arthropods, such as arachnids and crustaceans. We predict the number of aliens from these groups to increase in every region of the world by the middle of the century – by almost 120% in the temperate latitudes of Asia.”
The study also predicts that the rate of arrival of alien species will continue to increase, at least in some animal groups. Globally, by 2050, alien arthropod and bird species in particular will arrive faster than before, compared to the period 1960 – 2005. In Europe, the rate of new alien arrivals is expected to increase for all plant and animal groups except mammals.
Neither a reversal nor even a slowdown in the spread of alien species is in sight, as global trade and transport are expected to increase in the coming decades, allowing many species to infiltrate new habitats as stowaways.
Dr Seebens said: “We will not be able to entirely prevent the introduction of alien species, as this would mean severe restrictions in international trade.
“However, stricter regulations and their rigorous enforcement could greatly slow the flow of new species. The benefits of such measures have been shown in some parts of the world. Regulations are still comparatively lax in Europe, and so there is great potential here for new measures to curtail the arrival of new aliens.”
Reference: “Projecting the continental accumulation of alien species through to 2050” by Hanno Seebens, Sven Bacher, Tim M. Blackburn, César Capinha, Wayne Dawson, Stefan Dullinger, Piero Genovesi, Philip E. Hulme, Mark van Kleunen, Ingolf Kühn, Jonathan M. Jeschke, Bernd Lenzner, Andrew M. Liebhold, Zarah Pattison, Jan Pergl, Petr Pyšek, Marten Winter and Franz Essl, 1 October 2020, Global Change Biology. DOI: 10.1111/gcb.15333
Scientists characterize previously unknown gut reactions.
Strictly speaking, humans cannot digest complex carbohydrates — that’s the job of bacteria in our large intestines. UC Riverside scientists have just discovered a new group of viruses that attack these bacteria.
The viruses, and the way they evade counterattack by their bacterial hosts, are described in a paper published in Cell Reports.
Bacteroides can constitute up to 60% of all the bacteria living in a human’s large intestine, and they’re an important way that people get energy. Without them, we’d have a hard time digesting bread, beans, vegetables, or other favorite foods. Given their significance, it is surprising that scientists know so little about the viruses that prey on Bacteroides.
“This is largely unexplored territory,” said microbiologist Patrick Degnan, an assistant professor of microbiology and plant pathology, who led the research.
To find a virus that attacks Bacteroides, Degnan and his team analyzed a collection of bacterial genomes, where viruses can hide for numerous generations until something triggers them to replicate, attack, and leave their host. This viral lifestyle is not without risk, as over time mutations could occur that prevent the virus from escaping its host.
On analyzing the genome of Bacteroides vulgatus, Degnan’s team found DNA belonging to a virus they named BV01. However, determining whether the virus was capable of escaping or re-infecting its host proved challenging.
Reconstructed microscopy image of a bacteriophage, which is a virus that attacks bacteria. Credit: Purdue University and Seyet LLC
“We tried every trick we could think of. Nothing in the laboratory worked until we worked with a germ-free mouse model,” Degnan said. “Then, the virus jumped.”
This was possible due to Degnan’s collaboration with UCR colleague, co-author and fellow microbiologist Ansel Hsiao.
This result suggests conditions in mammalian guts act as a trigger for BV01 activity. The finding underscores the importance of both in vitro and in vivo experiments for understanding the biology of microbes.
Looking for more information about the indirect effects this bacterial virus might have on humans, Degnan’s team determined that when BV01 infects a host cell, it disrupts how that cell normally behaves.
“Over 100 genes change how they get expressed after infection,” Degnan said.
Two of the altered genes that stood out to the researchers are both responsible for deactivating bile acids, which are toxic to microbes. The authors speculate that while this possibly alters the sensitivity of the bacteria to bile acids, it also may influence the ability of the bacteria to be infected by other viruses.
“This virus can go in and change the metabolism of these bacteria in human guts that are so key for our own metabolism,” Degnan said.
Though the full extent of BV01 infection is not yet known, scientists believe viruses that change the abundance and activity of gut bacteria contribute to human health and disease. One area for future studies will involve the effect of diet on BV01 and viruses like it, as certain foods can cause our bodies to release more bile.
Degnan also notes that BV01 is only one of a group of viruses his team identified that function in similar ways. The group, Salyersviridae, is named after famed microbiologist Abigail Salyers whose work on intestinal bacteria furthered the science of antibiotic resistance.
Further research is planned to understand the biology of these viruses.
“It’s been sitting in plain sight, but no one has characterized this important group of viruses that affect what’s in our guts until now,” Degnan said.
Reference: “Infection with Bacteroides Phage BV01 Alters the Host Transcriptome and Bile Acid Metabolism in a Common Human Gut Microbe” by Danielle E. Campbell, Lindsey K. Ly and Jason M. R, 15 September 2020, Cell Reports. DOI: 10.1016/j.celrep.2020.108142
Rodents and pigs share with certain aquatic organisms the ability to use their intestines for respiration, finds a study publishing May 14th in the journal Med. The researchers demonstrated that the delivery of oxygen gas or oxygenated liquid through the rectum provided vital rescue to two mammalian models of respiratory failure.
“Artificial respiratory support plays a vital role in the clinical management of respiratory failure due to severe illnesses such as pneumonia or acute respiratory distress syndrome,” says senior study author Takanori Takebe (@TakebeLab) of the Tokyo Medical and Dental University and the Cincinnati Children’s Hospital Medical Center. “Although the side effects and safety need to be thoroughly evaluated in humans, our approach may offer a new paradigm to support critically ill patients with respiratory failure.”
Several aquatic organisms have evolved unique intestinal breathing mechanisms to survive under low-oxygen conditions using organs other than lungs or gills. For example, sea cucumbers, freshwater fish called loaches, and certain freshwater catfish use their intestines for respiration. But it has been heavily debated whether mammals have similar capabilities.
In the new study, Takebe and his collaborators provide evidence for intestinal breathing in rats, mice, and pigs. First, they designed an intestinal gas ventilation system to administer pure oxygen through the rectum of mice. They showed that without the system, no mice survived 11 minutes of extremely low-oxygen conditions. With intestinal gas ventilation, more oxygen reached the heart, and 75% of mice survived 50 minutes of normally lethal low-oxygen conditions.
Because the intestinal gas ventilation system requires abrasion of the intestinal mucosa, it is unlikely to be clinically feasible, especially in severely ill patients – so the researchers also developed a liquid-based alternative using oxygenated perfluorochemicals. These chemicals have already been shown clinically to be biocompatible and safe in humans.
The intestinal liquid ventilation system provided therapeutic benefits to rodents and pigs exposed to non-lethal low-oxygen conditions. Mice receiving intestinal ventilation could walk farther in a 10% oxygen chamber, and more oxygen reached their heart, compared to mice that did not receive intestinal ventilation. Similar results were evident in pigs. Intestinal liquid ventilation reversed skin pallor and coldness and increased their levels of oxygen, without producing obvious side effects. Taken together, the results show that this strategy is effective in providing oxygen that reaches circulation and alleviates respiratory failure symptoms in two mammalian model systems.
With support from the Japan Agency for Medical Research and Development to combat the coronavirus disease 2019 (COVID-19) pandemic, the researchers plan to expand their preclinical studies and pursue regulatory steps to accelerate the path to clinical translation.
“The recent SARS-CoV-2 pandemic is overwhelming the clinical need for ventilators and artificial lungs, resulting in a critical shortage of available devices, and endangering patients’ lives worldwide,” Takebe says. “The level of arterial oxygenation provided by our ventilation system, if scaled for human application, is likely sufficient to treat patients with severe respiratory failure, potentially providing life-saving oxygenation.”
Reference: “Mammalian enteral ventilation ameliorates respiratory failure” by Ryo Okabe, Toyofumi F. Chen-Yoshikawa, Yosuke Yoneyama, Yuhei Yokoyama, Satona Tanaka, Akihiko Yoshizawa, Wendy L. Thompson, Gokul Kannan, Eiji Kobayashi, Hiroshi Date and Takanori Takebe, 14 May 2021, Med. DOI: 10.1016/j.medj.2021.04.004
This work was supported by the Research Program on Emerging and Re-emerging Infectious Diseases and Research Projects on COVID-19 of the Japan Agency for Medical Research and Development (AMED), as well as by the AMED Translational Research Program and the AMED Program for Technological Innovation of Regenerative Medicine.
Staff engineer Bruis van Vlijmen is seen working inside the Battery Informatics Lab 1070 in the Arrillaga Science Center, Bldg. 57. Credit: Jacqueline Orrell/SLAC National Accelerator Laboratory
An X-ray instrument at Berkeley Lab contributed to a battery study that used an innovative machine-learning approach to speed up understanding of a process that shortens the life of fast-charging lithium batteries.
Researchers used Berkeley Lab’s Advanced Light Source, a synchrotron that produces light ranging from the infrared to X-rays for dozens of simultaneous experiments, to perform a chemical imaging technique known as scanning transmission X-ray microscopy, or STXM, at a state-of-the-art ALS beamline dubbed COSMIC.
Researchers also employed “in situ” X-ray diffraction at another synchrotron – SLAC’s Stanford Synchrotron Radiation Lightsource – to recreate the conditions present in a working battery, and they additionally used a many-particle battery model. All three forms of data were combined in a format that helped the machine-learning algorithms learn the physics at work in the battery.
While typical machine-learning algorithms seek out images that either do or don’t match a training set of images, in this study the researchers applied a deeper set of data from experiments and other sources to enable more refined results. It represents the first time this brand of “scientific machine learning” was applied to battery cycling, researchers noted. The study was published recently in Nature Materials.
The study benefited from an ability at the COSMIC beamline to single out the chemical states of about 100 individual particles, which was enabled by COSMIC’s high-speed, high-resolution imaging capabilities. Young-Sang Yu, a research scientist at the ALS who participated in the study, noted that each selected particle was imaged at about 50 different energy steps during the cycling process, for a total of 5,000 images.
The data from ALS experiments and other experiments were combined with data from fast-charging mathematical models, and with information about the chemistry and physics of fast charging, and then incorporated into the machine-learning algorithms.
“Rather than having the computer directly figure out the model by simply feeding it data, as we did in the two previous studies, we taught the computer how to choose or learn the right equations, and thus the right physics,” said Stanford postdoctoral researcher Stephen Dongmin Kang, a study co-author.
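The team’s actual pipeline is not described in enough detail here to reproduce, but the general idea of having an algorithm “choose the right equations” can be illustrated with a generic sparse-regression sketch over a library of candidate terms, in the spirit of methods such as SINDy. Everything below, from the synthetic data to the threshold, is an assumption for illustration rather than the method used in the Nature Materials study.

```python
# Minimal sketch of "learning the right equations" by sparse regression over a
# library of candidate terms (SINDy-style). Generic illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: pretend the true dynamics are dx/dt = -0.5*x + 0.2*x**3
x = rng.uniform(-1.0, 1.0, size=200)
dxdt = -0.5 * x + 0.2 * x**3 + 0.01 * rng.normal(size=x.size)

# Candidate library of terms the "physics" could be built from
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
names = ["1", "x", "x^2", "x^3"]

# Least-squares fit, then threshold small coefficients so only the terms
# the data actually supports survive.
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0

print("learned dx/dt ≈ " + " + ".join(
    f"{c:.2f}*{n}" for c, n in zip(coef, names) if c != 0.0))
```

The design point the quote gestures at is exactly this: rather than fitting an arbitrary black box to the data, the algorithm is constrained to pick terms from a physically meaningful library.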
Patrick Herring, senior research scientist for Toyota Research Institute, which supported the work through its Accelerated Materials Design and Discovery program, said, “By understanding the fundamental reactions that occur within the battery, we can extend its life, enable faster charging, and ultimately design better battery materials.”
Reference: “Fictitious phase separation in Li layered oxides driven by electro-autocatalysis” by Jungjin Park, Hongbo Zhao, Stephen Dongmin Kang, Kipil Lim, Chia-Chin Chen, Young-Sang Yu, Richard D. Braatz, David A. Shapiro, Jihyun Hong, Michael F. Toney, Martin Z. Bazant and William C. Chueh, 8 March 2021, Nature Materials. DOI: 10.1038/s41563-021-00936-1
Rapidly rotating neutron stars may be “humming” continuous gravitational waves. Credit: K. Wette
Remember the days before working from home? It’s Monday morning, you’re running late to beat the traffic, and you can’t find your car keys. What do you do? You might try moving from room to room, casting your eye over every flat surface, in the hope of spotting the missing keys. Of course, this assumes that they are somewhere in plain sight; if they’re hidden under a newspaper or have fallen behind the sofa, you’ll never spot them. Or you might be so convinced that you last saw the keys in the kitchen that you search only there: inside every cupboard, the microwave, dishwasher, back of the fridge, etc. Of course, if you left them on your bedside table, upending the kitchen is doomed to failure. So, which is the best strategy?
Scientists face a similar conundrum in the hunt for gravitational waves—ripples in the fabric of space and time—from rapidly spinning neutron stars. These stars are the densest objects in the Universe and, provided they’re not perfectly spherical, emit a very faint “hum” of continuous gravitational waves. Hearing this “hum” would allow scientists to peer deep inside a neutron star and discover its secrets, yielding new insights into the most extreme states of matter. However, our very sensitive “ears”—4-kilometer-sized detectors using powerful lasers—haven’t heard anything yet.
Part of the challenge is that, like the missing keys, scientists aren’t sure of the best search strategy. Most previous studies have taken the “room-to-room” approach, trying to find continuous gravitational waves in as many different places as possible. But this means you can only spend a limited amount of time listening for the tell-tale “hum” in any one location—in the same way that you can only spend so long staring at your coffee table, trying to discern a key-shaped object. And since the “hum” is very quiet, there’s a good chance you won’t even hear it.
In a study recently published in Physical Review D, a team of scientists, led by postdoctoral researcher Karl Wette from the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) at the Australian National University, tried the “where else could they be but the kitchen?” approach.
Wette explains: “We took an educated guess at a specific location where continuous gravitational waves might be, based in part on what we already know about pulsars—they’re like neutron stars but send out radio waves instead of continuous gravitational waves. We hypothesized that there would be continuous gravitational waves detected near pulsar radio waves.” Just like guessing that your missing keys will probably be close to your handbag or wallet.
Using existing observational data, the team spent a lot of time searching in this location (nearly 6,000 days of computer time!) listening carefully for that faint “hum.” They also used graphics processing units – specialist electronics normally used for computer games – to make their algorithms run super-fast.
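To get a feel for what “listening in one place” involves, the toy sketch below injects a very weak sinusoid into simulated noise within the 171–172 Hz band named in the reference and checks that band for excess spectral power. The amplitude, sample rate, and duration are invented, and the real search uses months of detector data and far more sophisticated statistics than a simple power spectrum.

```python
# Toy sketch of a targeted narrow-band search: look for excess power in the
# 171-172 Hz band of simulated noisy data. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
fs = 1024.0                         # sample rate, Hz (assumed)
t = np.arange(0, 600.0, 1.0 / fs)   # 10 minutes of fake data

signal = 0.05 * np.sin(2 * np.pi * 171.5 * t)          # very weak "hum" (made-up amplitude)
data = signal + rng.normal(scale=1.0, size=t.size)     # buried in detector noise

spectrum = np.abs(np.fft.rfft(data))**2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

band = (freqs >= 171.0) & (freqs <= 172.0)
elsewhere = (freqs >= 100.0) & (freqs <= 300.0) & ~band

print("max power in 171-172 Hz band:", spectrum[band].max())
print("typical power elsewhere     :", np.median(spectrum[elsewhere]))
```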
“Our search was significantly more sensitive than any previous search for this location,” says Wette. “Unfortunately, we didn’t hear anything, so our guess was wrong this time. It’s back to the drawing board for now, but we’ll keep listening.”
Reference: “Deep exploration for continuous gravitational waves at 171–172 Hz in LIGO second observing run data” by Karl Wette, Liam Dunn, Patrick Clearwater and Andrew Melatos, 23 March 2021, Physical Review D. DOI: 10.1103/PhysRevD.103.083020
A Van Leeuwenhoek microscope. Credit: Utrecht University/Rijksmuseum Boerhaave/TU Delft
A microscope used by Antoni van Leeuwenhoek to conduct pioneering research contains a surprisingly ordinary lens, as new research by Rijksmuseum Boerhaave Leiden and TU Delft shows. It is a remarkable finding, because Van Leeuwenhoek (1632-1723) led other scientists to believe that his instruments were exceptional. Consequently, there has been speculation about his method for making lenses for more than three centuries. The results of this study were published in Science Advances today (May 14, 2021).
Previous research carried out in 2018 already indicated that some of Van Leeuwenhoek’s microscopes contained common ground lenses. Researchers have now examined a particularly highly magnifying specimen, from the collection of the University Museum Utrecht. Although it did contain a different type of lens, the great surprise was that the lens-making method used was a common one.
TU Delft researcher Lambert van Eijck and curators Tiemen Cocquyt and Auke Gerrits of Rijksmuseum Boerhaave at the Reactor Institute Delft in The Netherlands. Credit: TU Delft
Pioneering but secretive
With his microscopes, Antoni van Leeuwenhoek saw a whole new world full of minute life which nobody had ever suspected could exist. He was the first to observe unicellular organisms, which is why he is called the father of microbiology. The detail of his observations was unprecedented and was only superseded over a century after his death.
His contemporaries were very curious about the lenses with which Van Leeuwenhoek managed to achieve such astounding feats. Van Leeuwenhoek, however, was very secretive about it, suggesting he had found a new way of making lenses. It now proves to have been an empty boast, at least as far as the Utrecht lens is concerned. This became clear when the researchers from Rijksmuseum Boerhaave Leiden and TU Delft subjected the Utrecht microscope to neutron tomography. It enabled them to examine the lens without opening the valuable microscope and destroying it in the process. The instrument was placed in a neutron beam at the Reactor Institute Delft, yielding a three-dimensional image of the lens.
Microscope lenses reconstructed according to the method of Robert Hooke, which Antoni van Leeuwenhoek also used for his highly magnifying microscopes. Credit: Rijksmuseum Boerhaave/TU Delft
Small globule
This lens turned out to be a small globule, and its appearance was consistent with a known production method used in Van Leeuwenhoek’s time. The lens was very probably made by holding a thin glass rod in the fire, so that the end curled up into a small ball, which was then broken off the glass rod.
This method was described in 1678 by another influential microscopist, the Englishman Robert Hooke, which inspired other scientists to do the same. Van Leeuwenhoek, too, may have taken his lead from Hooke. The new discovery is ironic, because it was in fact Hooke who was very curious to learn more about Van Leeuwenhoek’s ‘secret’ method.
The new study shows that Van Leeuwenhoek obtained extraordinary results with strikingly ordinary lens production methods.
The Van Leeuwenhoek microscope in question, property of the University Museum of Utrecht University. Credit: Utrecht University/Rijksmuseum Boerhaave/TU Delft
Ion beams can create chains of closely coupled quantum bits (qubits) based on nitrogen-vacancy “color centers” in diamond for use in quantum computing hardware. The honeycomb pattern in the photo shows the difference between areas exposed to the beam (darker) and masked-off areas. Results indicate it should be possible to create 10,000 coupled qubits over a distance of about the width of a human hair, an unrivaled number and density of qubits. Credit: Susan Brand/Berkeley Lab
A new way to form self-aligned ‘color centers’ promises scalability to over 10,000 qubits for applications in quantum sensing and quantum computing.
Achieving the immense promise of quantum computing requires new developments at every level, including the computing hardware itself. A Lawrence Berkeley National Laboratory (Berkeley Lab)-led international team of researchers has discovered a way to use ion beams to create long strings of “color center” qubits in diamond. Their work is detailed in the journal Applied Physics Letters.
The authors include several from Berkeley Lab: Arun Persaud, who led the study, and Thomas Schenkel, head of the Accelerator Technology and Applied Physics (ATAP) Division’s Fusion Science & Ion Beam Technology Program, as well as Casey Christian (now with Berkeley Lab’s Physics Division), Edward Barnard of Berkeley Lab’s Molecular Foundry, and ATAP affiliate Russell E. Lake.
Creating large numbers of high-quality quantum bits (qubits), in close enough proximity for coupling to each other, is one of the great challenges of quantum computing. Collaborating with colleagues worldwide, the team has been exploring the use of ion beams to create artificial color centers in diamond for use as qubits.
Color centers are microscopic defects – departures from the rigorous lattice structure of a crystal, such as diamond. The type of defect that is of specific interest for qubits is a nitrogen atom next to a vacancy, or empty space, in a diamond lattice. (Nitrogen is commonly found in the crystal lattice of diamond, which is primarily a crystalline form of carbon, and can contribute to the color of the stone.)
When excited by the rapid energy deposition of a passing ion, nitrogen-vacancy centers can form in the diamond lattice. The electron and nuclear spins of nitrogen-vacancy centers and the adjacent carbon atoms can all function as solid-state qubits, and the crystal lattice can help protect their coherence and mutual entanglement.
ATAP Division staff scientist Arun Persaud, principal investigator of this effort. Credit: Marilyn Sargent/Berkeley Lab
The result is a physically durable system that does not have to be used in a cryogenic environment – attractive attributes for quantum sensors and also for qubits in this type of solid-state quantum computer. However, making enough qubits, and making them close enough to each other, has been a challenge.
When swift (high-energy) heavy ions such as the beams this team used – gold ions with a kinetic energy of about one billion electron volts – pass through a material, such as nitrogen-doped diamond, they leave a trail of nitrogen-vacancy centers along their tracks. Color centers were found to form directly, without need for further annealing (heat treatment). What’s more, they formed all along the ion tracks, rather than only at the end of the ion range as had been expected from earlier studies with lower-energy ions. In these straight “percolation chains,” color-center qubits are aligned over distances of tens of microns, and are just a few nanometers from their nearest neighbors. A technique developed by Berkeley Lab’s Molecular Foundry measured color centers with depth resolution.
The work on qubit synthesis far from equilibrium was supported by the Department of Energy’s Office of Science. The next step in the research will be to physically cut out a group of these color centers – which are like a series of beads on a string – and show that they are indeed so closely coupled that they can be used as quantum registers.
Results published in the current article show that it will be possible to form quantum registers with up to about 10,000 coupled qubits – two orders of magnitude greater than achieved thus far with the complementary technology of ion-trap qubits – over a distance of about 50 microns (about the width of a human hair).
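That density follows from simple arithmetic on the figures quoted above:

```python
# Back-of-the-envelope check of the quoted qubit density.
track_length_um = 50.0     # ~ width of a human hair, from the article
n_qubits = 10_000

spacing_nm = track_length_um * 1000.0 / n_qubits
print(f"average spacing ~ {spacing_nm:.0f} nm between neighboring color centers")
# -> about 5 nm, consistent with "just a few nanometers from their nearest neighbors"
```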
“Interactions of swift heavy ions with materials have been studied for decades for a variety of purposes, including the behavior of nuclear materials and the effects of cosmic rays on electronics,” said Schenkel.
He added that researchers worldwide have sought to make quantum materials by artificially inducing color centers in diamond. “The solid-state approaches to quantum computing hardware scale beautifully, but integration has been a challenge. This is the first time that direct formation of color-center qubits along strings has been observed.”
The stars, like diamonds
On a minuscule and ephemeral scale (nanometers and picoseconds), the deposition of energy by the ion beams produces a state of high temperature and pressure; Schenkel likens the temperature, in the 5,000 K range, to the surface of the sun. Besides knocking carbon atoms out of the crystal lattice of diamond, this effect could enable fundamental studies of exotic states of transient warm dense matter, a state of matter that is present in many stars and large planets and which is difficult to study directly on Earth.
It might also enable formation of novel qubits with tailored properties that cannot be formed with conventional methods. “This opens a new direction for expanding our ability to form quantum registers,” said Schenkel.
Clockwise from bottom left: ATAP Division postdoctoral scholars Sahel Hakimi and Lieselotte Obst-Huebl, and staff scientists Kei Nakamura and Qing Ji, are shown at the target chamber of the iP2 beamline. A high-intensity, short-focal-length beam line, now under construction with DOE Office of Fusion Energy Sciences support, iP2 will be used for laser-based ion acceleration at the Berkeley Lab Laser Accelerator Center (BELLA). Laser-plasma ion acceleration offers the hope of performing many functions using a facility substantially smaller than conventional accelerators. Credit: Thor Swift/Berkeley Lab
Currently, color-center strings are formed with beams from large particle accelerators, such as the one at the German laboratory GSI that was used in this research. In the future, they might be made using compact laser-plasma accelerators like the ones being developed at the Berkeley Lab Laser Accelerator (BELLA) Center.
The BELLA Center is actively developing its ion-acceleration capabilities with funding by the DOE Office of Science. These capabilities will be used as part of LaserNetUS. Ion pulses from laser-plasma acceleration are very intense and greatly expand our ability to form transient states of highly excited and hot materials for qubit synthesis under novel conditions.
More facets in materials science far from equilibrium
The process of creating these color centers is interesting in its own right and has to be better understood as part of further progress in these applications. The details of how an intense ion beam deposits energy as it traverses the diamond samples, and the exact mechanism by which this leads to color-center formation, hold exciting prospects for further research.
“This work demonstrates both the discovery science opportunities and the potential for societally transformative innovations enabled by the beams from accelerators,” says ATAP Division Director Cameron Geddes. “With accelerators, we create unique states of matter and new capabilities that are not possible by other means.”
Reference: “Direct formation of nitrogen-vacancy centers in nitrogen doped diamond along the trajectories of swift heavy ions” by Russell E. Lake, Arun Persaud, Casey Christian, Edward S. Barnard, Emory M. Chan, Andrew A. Bettiol, Marilena Tomut, Christina Trautmann and Thomas Schenkel, 24 February 2021, Applied Physics Letters. DOI: 10.1063/5.0036643
With a frilled head and beaked face, Menefeeceratops sealeyi, discovered in New Mexico, lived 82 million years ago. It predated its better-known relative, Triceratops. Credit: Sergey Krasovskiy
With a frilled head and beaked face, Menefeeceratops sealeyi lived 82 million years ago, predating its relative, Triceratops. Researchers including Peter Dodson, of the School of Veterinary Medicine, and Steven Jasinski, who recently earned his doctorate from the School of Arts & Sciences, describe the find.
A newly described horned dinosaur that lived in New Mexico 82 million years ago is one of the earliest known ceratopsid species, a group known as horned or frilled dinosaurs. Researchers reported their find in a publication in the journal PalZ (Paläontologische Zeitschrift).
Menefeeceratops sealeyi adds important information to scientists’ understanding of the evolution of ceratopsid dinosaurs, which are characterized by horns and frills, along with beaked faces. In particular, the discovery sheds light on the centrosaurine subfamily of horned dinosaurs, of which Menefeeceratops is believed to be the oldest member. Its remains offer a clearer picture of the group’s evolutionary path before it went extinct at the end of the Cretaceous.
Steven Jasinski, who recently completed his Ph.D. in Penn’s Department of Earth and Environmental Science in the School of Arts & Sciences, and Peter Dodson of the School of Veterinary Medicine and Penn Arts & Sciences, collaborated on the work, which was led by Sebastian Dalman of the New Mexico Museum of Natural History and Science. Spencer Lucas and Asher Lichtig of the New Mexico Museum of Natural History and Science in Albuquerque were also part of the research team.
“There has been a striking increase in our knowledge of ceratopsid diversity during the past two decades,” says Dodson, who specializes in the study of horned dinosaurs. “Much of that has resulted from discoveries farther north, from Utah to Alberta. It is particularly exciting that this find so far south is significantly older than any previous ceratopsid discovery. It underscores the importance of the Menefee dinosaur fauna for the understanding of the evolution of Late Cretaceous dinosaur faunas throughout western North America.”
The fossil specimen of the new species, including multiple bones from one individual, was originally discovered in 1996 by Paul Sealey, a research associate of the New Mexico Museum of Natural History and Science, in Cretaceous rocks of the Menefee Formation in northwestern New Mexico. A field crew from the New Mexico Museum of Natural History and Science collected the specimen. Tom Williamson of the New Mexico Museum of Natural History and Science briefly described it the following year, and recent research on other ceratopsid dinosaurs and further preparation of the specimen shed important new light on the fossils.
Based on the latest investigations, researchers determined the fossils represent a new species. The genus name Menefeeceratops refers to the rock formation in which it was discovered, the Menefee Formation, and to the group of which the species is a part, Ceratopsidae. The species name sealeyi honors Sealey, who unearthed the specimen.
Menefeeceratops is related to but predates Triceratops, another ceratopsid dinosaur. However, Menefeeceratops was a relatively small member of the group, growing to around 13 to 15 feet long, compared to Triceratops, which could grow up to 30 feet long.
Horned dinosaurs were generally large, rhinoceros-like herbivores that likely lived in groups or herds. They were significant members of Late Cretaceous ecosystems in North America. “Ceratopsids are better known from various localities in western North America during the Late Cretaceous near the end of the time of dinosaurs,” says Jasinski. “But we have less information about the group, and their fossils are rarer, when you go back before about 79 million years ago.”
Although bones of the entire dinosaur were not recovered, a significant amount of the skeleton was preserved, including parts of the skull and lower jaws, forearm, hindlimbs, pelvis, vertebrae, and ribs. These bones not only show the animal is unique among known dinosaur species but also provide additional clues to its life history. For example, the fossils show evidence of a potential pathology, resulting from a minor injury or disease, on at least one of the vertebrae near the base of its spinal column.
Some of the key features that distinguish Menefeeceratops from other horned dinosaurs involve the bone that makes up the sides of the dinosaur’s frill, known as the squamosal. While less ornate than those of some other ceratopsids, Menefeeceratops’ squamosal has a distinct pattern of concave and convex parts.
Comparing features of Menefeeceratops with other known ceratopsid dinosaurs helped the research team trace its evolutionary relationships. Their analysis places Menefeeceratops sealeyi at the base of the evolutionary tree of the centrosaurine subfamily, suggesting that not only is Menefeeceratops one of the oldest known centrosaurine ceratopsids, but also one of the most basal evolutionarily.
Menefeeceratops was part of an ancient ecosystem with numerous other dinosaurs, including the recently recognized nodosaurid ankylosaur Invictarx and the tyrannosaurid Dynamoterror, as well as hadrosaurids and dromaeosaurids.
“Menefeeceratops was part of a thriving Cretaceous ecosystem in the southwestern United States with dinosaurs that predated a lot of the more well-known members closer to the end of the Cretaceous,” says Jasinski.
While relatively little work has been done collecting dinosaurs in the Menefee Formation to date, the researchers hope that more field work and collecting in these areas, together with new analyses, will turn up more fossils of Menefeeceratops and ensure a better understanding of the ancient ecosystem of which it was part.
Reference: “The oldest centrosaurine: a new ceratopsid dinosaur (Dinosauria: Ceratopsidae) from the Allison Member of the Menefee Formation (Upper Cretaceous, early Campanian), northwestern New Mexico, USA” by Sebastian G. Dalman, Spencer G. Lucas, Steven E. Jasinski, Asher J. Lichtig and Peter Dodson, 10 May 2021, PalZ. DOI: 10.1007/s12542-021-00555-w
Peter Dodson is a professor of anatomy in the School of Veterinary Medicine and a professor of earth and environmental science in the School of Arts & Sciences at the University of Pennsylvania.
Steven E. Jasinski is a curator of paleontology and geology at the State Museum of Pennsylvania and corporate faculty at Harrisburg University of Science and Technology. He earned his doctoral degree in the Department of Earth and Environmental Science in the University of Pennsylvania’s School of Arts & Sciences.
Sebastian G. Dalman is a research associate at the New Mexico Museum of Natural History and Science in Albuquerque.
Spencer G. Lucas is a curator of paleontology at the New Mexico Museum of Natural History and Science in Albuquerque.
Asher J. Lichtig is a research associate at the New Mexico Museum of Natural History and Science in Albuquerque.
Jasinski was supported by Geo. L. Harrison and Benjamin Franklin fellowships while attending the University of Pennsylvania. The research was also partially funded by a Walker Endowment Research Grant and a University of Pennsylvania Paleontology Research Grant.
Areas within and outside Safe Climatic Space for food production 2081-2100. Credit: Matti Kummu/Aalto University
New estimates show that if greenhouse gases continue growing at current rates, large regions are at risk of being pushed into climate conditions in which no food is grown today.
Climate change is known to negatively affect agriculture and livestock, but there has been little scientific knowledge of which regions of the planet would be hit or what the biggest risks may be. New research led by Aalto University assesses just how global food production will be affected if greenhouse gas emissions are left uncut. The study is published in the prestigious journal One Earth today (Friday, May 14, 2021).
“Our research shows that rapid, out-of-control growth of greenhouse gas emissions may, by the end of the century, lead to more than a third of current global food production falling into conditions in which no food is produced today — that is, out of safe climatic space,” explains Matti Kummu, professor of global water and food issues at Aalto University.
According to the study, this scenario is likely to occur if carbon dioxide emissions continue growing at current rates. In the study, the researchers define the concept of safe climatic space as those areas where 95% of crop production currently takes place, based on a combination of three climate factors: rainfall, temperature, and aridity.
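As a hedged sketch of the general idea only (not the study’s actual method or data), one could take the ranges of rainfall, temperature, and aridity that cover roughly 95% of present-day production and flag grid cells whose projected climate leaves that envelope. Every number and distribution below is an invented placeholder.

```python
# Toy sketch of a "safe climatic space" check (illustrative; not the study's method).
# Idea: from present-day cropland cells, take the climate ranges that cover ~95% of
# production, then flag future cells whose climate falls outside those ranges.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical present-day cropland cells: annual rainfall (mm), mean temp (C), aridity index
present = rng.normal(loc=[900.0, 18.0, 0.8], scale=[300.0, 6.0, 0.3], size=(5000, 3))

# Envelope that contains ~95% of the present-day values for each climate variable
lo = np.percentile(present, 2.5, axis=0)
hi = np.percentile(present, 97.5, axis=0)

def outside_safe_space(cells):
    """True where a cell's climate leaves the present-day envelope in any variable."""
    return np.any((cells < lo) | (cells > hi), axis=1)

# Hypothetical end-of-century climate for the same cells under strong warming
future = present + rng.normal(loc=[-100.0, 3.5, -0.15], scale=[80.0, 1.0, 0.1],
                              size=present.shape)

share = outside_safe_space(future).mean()
print(f"share of cells pushed outside the safe climatic space: {share:.0%}")
```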
High emissions scenario: areas within and outside Safe Climatic Space for food production 2081-2100 (see comparison image for legend). Credit: Matti Kummu/Aalto University
“The good news is that only a fraction of food production would face as-of-yet unseen conditions if we collectively reduce emissions, so that warming would be limited to 1.5 to 2 degrees Celsius,” says Kummu.
Changes in rainfall and aridity as well as the warming climate are especially threatening to food production in South and Southeast Asia as well as the Sahel region of Africa. These are also areas that lack the capacity to adapt to changing conditions.
“Food production as we know it developed under a fairly stable climate, during a period of slow warming that followed the last ice age. The continuous growth of greenhouse gas emissions may create new conditions, and food crop and livestock production just won’t have enough time to adapt,” says Doctoral Candidate Matias Heino, the other main author of the publication.
Close-up of global low emissions scenario: areas within and outside Safe Climatic Space for food production 2081-2100 (see comparison image for legend). Credit: Matti Kummu/Aalto University
Two future scenarios for climate change were used in the study: one in which carbon dioxide emissions are cut radically, limiting global warming to 1.5-2 degrees Celsius, and another in which emissions continue growing unhalted.
The researchers assessed how climate change would affect 27 of the most important food crops and seven different livestock, accounting for societies’ varying capacities to adapt to changes. The results show that threats affect countries and continents in different ways; in 52 of the 177 countries studied, the entire food production would remain in the safe climatic space in the future. These include Finland and most other European countries.
Already vulnerable countries such as Benin, Cambodia, Ghana, Guinea-Bissau, Guyana and Suriname will be hit hard if no changes are made; up to 95 percent of current food production would fall outside of safe climatic space. Alarmingly, these nations also have significantly less capacity to adapt to changes brought on by climate change when compared to rich Western countries. In all, 20% of the world’s crop production and 18% of livestock production under threat are located in countries with low resilience to adapt to changes.
If carbon dioxide emissions are brought under control, the researchers estimate that the world’s largest climatic zone of today — the boreal forest, which stretches across northern North America, Russia and Europe — would shrink from its current 18.0 to 14.8 million square kilometers by 2100. Should we not be able to cut emissions, only roughly 8 million square kilometers of the vast forest would remain. The change would be even more dramatic in North America: in 2000, the zone covered approximately 6.7 million square kilometers — by 2090 it may shrink to one-third.
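Simple arithmetic on the figures quoted above gives the implied relative losses:

```python
# Relative shrinkage of the boreal zone implied by the areas quoted in the article.
def shrink(start_mkm2, end_mkm2):
    return (start_mkm2 - end_mkm2) / start_mkm2

print(f"global boreal, emissions cut:      {shrink(18.0, 14.8):.0%} smaller by 2100")
print(f"global boreal, emissions unchecked: {shrink(18.0, 8.0):.0%} smaller by 2100")
print(f"North American boreal (to one-third): {shrink(6.7, 6.7 / 3):.0%} smaller by 2090")
```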
Arctic tundra would be even worse off: it is estimated to disappear completely if climate change is not reined in. At the same time, the tropical dry forest and tropical desert zones are estimated to grow.
“If we let emissions grow, the increase in desert areas is especially troubling because in these conditions barely anything can grow without irrigation. By the end of this century, we could see more than 4 million square kilometers of new desert around the globe,” Kummu says.
While the study is the first to take a holistic look at the climatic conditions where food is grown today and how climate change will affect these areas in coming decades, its take-home message is by no means unique: the world needs urgent action.
“We need to mitigate climate change and, at the same time, boost the resilience of our food systems and societies — we cannot leave the vulnerable behind. Food production must be sustainable,” says Heino.
Reference: “Climate change risks pushing one-third of global food production outside the safe climatic space” by Matti Kummu, Matias Heino, Maija Taka, Olli Varis and Daniel Viviroli, 14 May 2021, One Earth. DOI: 10.1016/j.oneear.2021.04.017