NASA Lunar Payloads: New Science Investigations for the Far Side of the Moon
Commercial Lunar Lander

Commercial landers will carry NASA-provided science and technology payloads to the lunar surface, paving the way for NASA astronauts to land on the Moon by 2024. Credit: NASA

As NASA continues plans for multiple commercial deliveries to the Moon’s surface per year, the agency has selected three new scientific investigation payload suites to advance understanding of Earth’s nearest neighbor. Two of the payload suites will land on the far side of the Moon, a first for NASA. All three investigations will receive rides to the lunar surface as part of NASA’s Commercial Lunar Payload Services, or CLPS, initiative, part of the agency’s Artemis approach.

The payloads mark the agency’s first selections from its Payloads and Research Investigations on the Surface of the Moon (PRISM) call for proposals.

“These selections add to our robust pipeline of science payloads and investigations to be delivered to the Moon through CLPS,” said Joel Kearns, deputy associate administrator for exploration in NASA’s Science Mission Directorate. “With each new PRISM selection, we will build on our capabilities to enable bigger and better science and prove technology which will help pave the way for returning astronauts to the Moon through Artemis.”

Lunar Vertex, one of the three selections, is a joint lander and rover payload suite slated for delivery to Reiner Gamma – one of the most distinctive and enigmatic natural features on the Moon, known as a lunar swirl. Scientists don’t fully understand what lunar swirls are or how they form, but they know they are closely related to anomalies in the Moon’s magnetic field. The Lunar Vertex rover will make detailed surface measurements of the Moon’s magnetic field using an onboard magnetometer. The surface magnetic field data the rover collects will complement data collected by spacecraft in orbit around the Moon, helping scientists better understand how these mysterious lunar swirls form and evolve and providing further insight into the Moon’s interior and core. Dr. David Blewett of the Johns Hopkins University Applied Physics Laboratory leads this payload suite.

NASA also has selected two separate payload suites for delivery in tandem to Schrödinger basin, a large impact crater on the far side of the Moon near the lunar South Pole. The Farside Seismic Suite (FSS), one of the two payloads to be delivered to Schrödinger basin, will carry two seismometers: the vertical Very Broadband seismometer and the Short Period sensor. NASA measured seismic activity on the near side of the Moon as part of the Apollo program, but FSS will return the agency’s first seismic data from the far side of the Moon—a potential future destination for Artemis astronauts. These new data could help scientists better understand tectonic activity on the far side of the Moon, reveal how often the lunar far side is impacted by small meteorites, and provide new constraints on the internal structure of the Moon. FSS will continue taking data on the lunar surface for several months, beyond the lifetime of the lander. To survive the two-week-long lunar nights, the FSS package will be self-sufficient, with independent power, communications, and thermal control. Dr. Mark Panning of NASA’s Jet Propulsion Laboratory in California leads this payload suite.

The Lunar Reconnaissance Orbiter captured this image of Schrödinger basin, a large crater near the south pole on the lunar far side. Credit: NASA/LRO/Ernie Wright

The Lunar Interior Temperature and Materials Suite (LITMS), the other payload headed to Schrödinger basin, is a suite of two instruments: the Lunar Instrumentation for Thermal Exploration with Rapidity pneumatic drill and the Lunar Magnetotelluric Sounder. This payload suite will investigate the heat flow and electrical conductivity of the lunar interior at Schrödinger basin, giving an in-depth look at the Moon’s internal structure and heat flow. LITMS data also will complement seismic data acquired by the FSS to provide a more complete picture of the near- and deep-subsurface of the far side of the Moon. Dr. Robert Grimm of the Southwest Research Institute leads this payload suite.

While these selections are final, negotiations are continuing for each award amount.

“These investigations demonstrate the power of CLPS to deliver big science in small packages, providing access to the lunar surface to address high priority science goals for the Moon,” said Lori Glaze, director of NASA’s Planetary Science Division. “When scientists analyze these new data alongside lunar samples returned from Apollo and data from our many orbital missions, they will advance our knowledge of the lunar surface and interior, and increase our understanding of crucial phenomena such as space weathering to inform future crewed missions to the Moon and beyond.”

With these selections in place, NASA will work with the CLPS office at the agency’s Johnson Space Center in Houston to issue task orders to deliver these payload suites to the Moon in the 2024 timeframe.

For these payload suites, the agency also has selected two project scientists to coordinate science activities including selecting landing sites, developing concepts of operations, and archiving science data acquired during surface operations. Dr. Heidi Haviland of NASA’s Marshall Space Flight Center in Huntsville, Alabama, will coordinate the suite slated for delivery to Reiner Gamma, and Dr. Brent Garry of NASA’s Goddard Space Flight Center in Greenbelt, Maryland, will coordinate payload deliveries to Schrödinger basin.

CLPS is a key part of NASA’s Artemis lunar exploration efforts. The science and technology payloads sent to the Moon’s surface as part of CLPS will help lay the foundation for human missions and a sustainable human presence on the lunar surface. The agency has made six task order awards to CLPS providers for lunar deliveries between late 2021 and 2023, with more delivery awards expected at least through 2028.

Social Secrets of Killer Whales Revealed by Drone Footage

Killer whales have complex social structures including close “friendships,” according to a new study that used drones to film the animals.

The findings show that killer whales spend more time interacting with certain individuals in their pod, and tend to favor those of the same sex and similar age.

The study, led by the University of Exeter and the Center for Whale Research (CWR), also found that the whales become less socially connected as they get older.

“Until now, research on killer whale social networks has relied on seeing the whales when they surface, and recording which whales are together,” said lead author Dr. Michael Weiss, of the University of Exeter.

“However, because resident killer whales stay in the social groups into which they’re born, how closely related whales are seemed to be the only thing that explained their social structure.

“Looking down into the water from a drone allowed us to see details such as contact between individual whales.

“Our findings show that, even within these tight-knit groups, whales prefer to interact with specific individuals. It’s like when your mom takes you to a party as a kid — you didn’t choose the party, but you can still choose who to hang out with once you’re there.”

Killer Whales

Killer whales making contact with each other. Credit: University of Exeter

Patterns of physical contact — one of the social interactions the study measured — suggest that younger whales and females play a central social role in the group. The older the whale, the less central they became.

The new research built on more than four decades of data collected by CWR on southern resident killer whales, a critically endangered population in the Pacific Ocean.

“This study would not have been possible without the amazing work done by CWR,” said Professor Darren Croft, of Exeter’s Centre for Research in Animal Behaviour. “By adding drones to our toolkit, we have been able to dive into the social lives of these animals as never before.

“We were amazed to see how much contact there is between whales — how tactile they are.

“In many species, including humans, physical contact tends to be a soothing, stress-relieving activity that reinforces social connection. We also examined occasions when whales surfaced together — as acting in unison is a sign of social ties in many species.

“We found fascinating parallels between the behavior of whales and other mammals, and we are excited about the next stages of this research.”

Reference: “Age and sex influence social interactions, but not associations, within a killer whale pod” 16 June 2021, Proceedings of the Royal Society B.
DOI: 10.1098/rspb.2021.0617

The start of this drone project — including the purchase of one of the drones used in this study — was made possible by a crowd-funding campaign supported by members of the public, including University of Exeter alumni.

Results from the new study are based on 651 minutes of video filmed over ten days.

The study’s use of drones was conducted under research permits issued by the US National Marine Fisheries Service, and all pilots were licensed under the US Federal Aviation Administration.

The research team included researchers from the universities of York and Washington and the Institute of Biophysics, and the study was partly funded by the Natural Environment Research Council (NERC).

The study, published in the journal Proceedings of the Royal Society B, is entitled: “Age and sex influence social interactions, but not associations, within a killer whale pod.”

Methane-Eating Microbes in Ocean Play Important Role in Moderating Earth’s Temperature
Carbonate Chimneys at Point Dume

Two views of the carbonate chimneys at the Point Dume methane seep off southern California; the chimneys are covered with colorful microbial mats and permeated by methane-eating microbes. Credit: Courtesy of the Schmidt Ocean Institute

Methane-eating microbes help regulate Earth’s temperatures with remarkably high metabolic rates within seafloor carbonate rocks.

Methane is a strong greenhouse gas that plays a key role in Earth’s climate. Anytime we use natural gas, whether to light up our kitchen stove or barbecue, we are using methane.

Only three sources on Earth produce methane naturally: volcanoes, subsurface water-rock interactions, and microbes. Of these three, microbes generate the most, and they have deposited hundreds of gigatons of methane into the deep seafloor. At seafloor methane seeps, this methane percolates upward toward the open ocean, and microbial communities consume the majority of it before it reaches the atmosphere. Over the years, researchers have found more and more methane beneath the seafloor, yet very little of it ever leaves the oceans and reaches the atmosphere. Where is the rest going?

A team of researchers led by Jeffrey J. Marlow, former postdoctoral researcher in Organismic and Evolutionary Biology at Harvard University, discovered microbial communities that rapidly consume this methane, preventing its escape into Earth’s atmosphere. The study, published in Proceedings of the National Academy of Sciences, examined methane-eating microbes collected from seven geologically diverse seafloor seeps and found, most surprisingly, that the carbonate rocks from one site in particular host methane-oxidizing microbial communities with the highest rates of methane consumption measured to date.

“The microbes in these carbonate rocks are acting like a methane biofilter, consuming it all before it leaves the ocean,” said senior author Peter Girguis, Professor of Organismic and Evolutionary Biology, Harvard University. Researchers have studied microbes living in seafloor sediment for decades and know these microbes are consuming methane. This study, however, examined in great detail the microbes that thrive in the carbonate rocks.

Seafloor carbonate rocks are common, but in select locations they form unusual chimney-like structures. These chimneys reach 12 to 60 inches (roughly 30 to 150 centimeters) in height and are found in groups along the seafloor, resembling a stand of trees. Unlike many other types of rocks, these carbonate rocks are porous, creating channels that are home to a very dense community of methane-consuming microbes. In some cases, these microbes are found in much higher densities within the rocks than in the sediment.

During a 2015 expedition funded by the Ocean Exploration Trust, Girguis discovered a carbonate chimney reef off the coast of southern California at the deep-sea site Point Dume. Girguis returned in 2017 with funding from NASA to build a seafloor observatory. Upon joining Girguis’s lab, Marlow, currently Assistant Professor of Biology at Boston University, was studying microbes in carbonates. The two decided to conduct a community study and gather samples from the site.

“We measured the rate at which the microbes from the carbonates eat methane compared to microbes in sediment,” said Girguis. “We discovered the microbes living in the carbonates consume methane 50 times faster than microbes in the sediment. We often see that some sediment microbes from methane-rich mud volcanoes, for example, may be five to ten times faster at eating methane, but 50 times faster is a whole new thing. Moreover, these rates are among the highest, if not the highest, we’ve measured anywhere.”

“These rates of methane oxidation, or consumption, are really extraordinary, and we set out to understand why,” said Marlow.

The team found that the carbonate chimney sets up an ideal home for the microbes to eat a lot of methane really fast. “These chimneys exist because some methane in fluid flowing out from the subsurface is transformed by the microbes into bicarbonate, which can then precipitate out of the seawater as carbonate rock,” said Marlow. “We’re still trying to figure out where that fluid — and its methane — is coming from.”

The micro-environments within the carbonates may contain more methane than the sediment does because of the rocks’ porous nature. Carbonates have channels that constantly irrigate the microbes with fresh methane and other nutrients, allowing them to consume methane faster. In sediment, the supply of methane is often limited because it diffuses through smaller, winding channels between mineral grains.

A startling find was that, in some cases, these microbes are surrounded by pyrite, which is electrically conductive. One possible explanation for the high rates of methane consumption is that the pyrite provides an electrical conduit that passes electrons back and forth, allowing the microbes to have higher metabolic rates and consume methane quickly.

“These very high rates are facilitated by these carbonates which provide a framework for the microbes to grow,” said Girguis. “The system resembles a marketplace where carbonates allow a bunch of microbes to aggregate in one place and grow and exchange — in this case, exchange electrons — which allows for more methane consumption.”

Marlow agreed, “When microbes work together they’re either exchanging building blocks like carbon or nitrogen, or they’re exchanging energy. And one way to do that is through electrons, like an energy currency. The pyrite interspersed throughout these carbonate rocks could help that electron exchange happen more swiftly and broadly.”

In the lab, the researchers put the collected carbonates into high-pressure reactors and recreated the conditions of the seafloor. They supplied the microbes with methane isotopically labeled with carbon-14 or deuterium (hydrogen-2) in order to track methane production and consumption. The team next compared the data from Point Dume to six additional sites, from the Gulf of Mexico to the coast of New England. In all locations, carbonate rocks at methane seeps contained methane-eating microbes.

“Next we plan to disentangle how each of these different parts of the carbonates — the structure, electrical conductivity, fluid flow, and dense microbial community — make this possible. As of now, we don’t know the exact contribution of each,” said Girguis.

“First, we need to understand how these microbes sustain their metabolic rate, whether they’re in a chimney or in the sediment. And we need to know this in our changing world in order to build our predictive power,” said Marlow. “Once we clarify how these many interconnected factors come together to turn methane to rock, we can then ask how we might apply these anaerobic methane-eating microbes to other situations, like landfills with methane leaks.”

Reference: “Carbonate-hosted microbial communities are prolific and pervasive methane oxidizers at geologically diverse marine methane seep sites” by Jeffrey J. Marlow, Daniel Hoer, Sean P. Jungbluth, Linda M. Reynard, Amy Gartman, Marko S. Chavez, Mohamed Y. El-Naggar, Noreen Tuross, Victoria J. Orphan and Peter R. Girguis, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2006857118

Seabird Eggs Contaminated With Chemical Cocktail of Plastic Additives
Herring Gull Chick and Eggs

A herring gull chick and eggs. Credit: Prof Jon Blount

Chemical additives used in plastic production have been found in herring gull eggs, new research shows.

Phthalates are a group of chemicals added to plastics to keep them flexible.

The study, by the universities of Exeter and Queensland, looked for evidence of phthalates in newly laid herring gull eggs — and found up to six types of phthalate per egg.

Phthalates function as pro-oxidants — potentially causing “oxidative stress” that can damage cells.

“Herring gull mothers pass on vital nutrients to their offspring via their eggs,” said Professor Jon Blount, of the Centre for Ecology and Conservation at the University of Exeter’s Penryn Campus in Cornwall.

“This includes lipids that nourish developing embryos, and vitamin E, which helps to protect chicks from oxidative stress that can occur during development and at hatching.

“Unfortunately, our findings suggest that mothers are inadvertently passing on phthalates and products of lipid damage — and eggs with higher phthalate contamination also contained greater amounts of lipid damage and less vitamin E.”

The impact of this on developing chicks is unknown, and further investigation is needed.

Researchers collected 13 herring gull eggs from sites in Cornwall, UK, and all 13 were found to contain phthalates.

Phthalates — which are used in most plastic products and readily leach out — are now found in almost every environment on Earth. They can “bio-accumulate” (build up in living organisms) by becoming concentrated in fatty tissues.

The study does not show where the gulls acquired the phthalates, but phthalates have previously been found in species preyed on by herring gulls, and the birds are known to swallow plastic.

“Research on the impact of plastic on animals has largely focused on entanglement and ingestion of plastic fragments,” Professor Blount said. “Far less is known about the impacts of plastic additives on the body.

“By testing eggs, our study gives us a snapshot of the mother’s health — and it appears phthalate contamination could be associated with increased oxidative stress, and mothers transfer this cost to their offspring via the egg.

“More research is now needed to discover how developing offspring are affected by being exposed to phthalates before they have even emerged as a hatchling.”

He added: “We need to look more deeply into the pervasive threats of plastics — not just the breakdown of plastic items themselves, but also the dispersal of the multiple chemicals they contain.

“Where do these end up, and what effects are they having on wildlife and ecosystems?”

Reference: “Phthalate diversity in eggs and associations with oxidative stress in the European herring gull (Larus argentatus)” 16 June 2021, Marine Pollution Bulletin.

The study received an initiator grant from QUEX, a partnership between the universities of Exeter and Queensland.

Efficient Gene Therapy: A New Technique for Correcting Disease-Causing Mutations
RAD51 Staining Embryo

Staining for RAD51 (bright cyan-colored dot) in a fertilized one-cell mouse embryo shows repair of a CRISPR-induced DNA break. Credit: Image courtesy of the researchers.

Novel method, developed by McGovern Institute researchers, may lead to safer, more efficient gene therapies.

Gene editing, or purposefully changing a gene’s DNA sequence, is a powerful tool for studying how mutations cause disease, and for making changes in an individual’s DNA for therapeutic purposes. A novel method of gene editing that can be used for both purposes has now been developed by a team led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT.

“This technical advance can accelerate the production of disease models in animals and, critically, opens up a brand-new methodology for correcting disease-causing mutations,” says Feng, who is also a member of the Broad Institute of Harvard and MIT and the associate director of the McGovern Institute for Brain Research at MIT. The new findings were published online on May 26, 2021, in the journal Cell.

Genetic models of disease

A major goal of the Feng lab is to precisely define what goes wrong in neurodevelopmental and neuropsychiatric disorders by engineering animal models that carry the gene mutations that cause these disorders in humans. New models can be generated by injecting embryos with gene editing tools, along with a piece of DNA carrying the desired mutation.

In one such method, the gene editing tool CRISPR is programmed to cut a targeted gene, thereby activating natural DNA mechanisms that “repair” the broken gene with the injected template DNA. The engineered cells are then used to generate offspring capable of passing the genetic change on to further generations, creating a stable genetic line in which the disease, and therapies, are tested.

Although CRISPR has accelerated the process of generating such disease models, the process can still take months or years. One reason for the inefficiency is that many treated cells do not undergo the desired DNA sequence change at all; another is that, when the change does occur, it often occurs on only one of the two gene copies (for most genes, each cell contains two versions, one from the father and one from the mother).

In an effort to increase the efficiency of the gene editing process, the Feng lab team initially hypothesized that adding a DNA repair protein called RAD51 to a standard mixture of CRISPR gene editing tools would increase the chances that a cell (in this case a fertilized mouse egg, or one-cell embryo) would undergo the desired genetic change.

As a test case, they measured the rate at which they were able to insert (“knock-in”) a mutation in the gene Chd2 that is associated with autism. The overall proportion of embryos that were correctly edited remained unchanged, but to their surprise, a significantly higher percentage carried the desired gene edit on both chromosomes. Tests with a different gene yielded the same unexpected outcome.

“Editing of both chromosomes simultaneously is normally very uncommon,” explains postdoc Jonathan Wilde. “The high rate of editing seen with RAD51 was really striking, and what started as a simple attempt to make mutant Chd2 mice quickly turned into a much bigger project focused on RAD51 and its applications in genome editing,” says Wilde, who co-authored the Cell paper with research scientist Tomomi Aida.

A molecular copy machine

The Feng lab team next set out to understand the mechanism by which RAD51 enhances gene editing. They hypothesized that RAD51 engages a process called interhomolog repair (IHR), whereby a DNA break on one chromosome is repaired using the second copy of the chromosome (from the other parent) as the template.

To test this, they injected mouse embryos with RAD51 and CRISPR but left out the template DNA. They programmed CRISPR to cut only the gene sequence on one of the chromosomes, and then tested whether it was repaired to match the sequence on the uncut chromosome. For this experiment, they had to use mice in which the sequences on the maternal and paternal chromosomes were different.

They found that control embryos injected with CRISPR alone rarely showed IHR repair. However, addition of RAD51 significantly increased the number of embryos in which the CRISPR-targeted gene was edited to match the uncut chromosome.

“Previous studies of IHR found that it is incredibly inefficient in most cells,” says Wilde. “Our finding that it occurs much more readily in embryonic cells and can be enhanced by RAD51 suggests that a deeper understanding of what makes the embryo permissive to this type of DNA repair could help us design safer and more efficient gene therapies.”

A new way to correct disease-causing mutations

Standard gene therapy strategies that rely on injecting a corrective piece of DNA to serve as a template for repairing the mutation engage a process called homology-directed repair (HDR).

“HDR-based strategies still suffer from low efficiency and carry the risk of unwanted integration of donor DNA throughout the genome,” explains Feng. “IHR has the potential to overcome these problems because it relies upon natural cellular pathways and the patient’s own normal chromosome for correction of the deleterious mutation.”

Feng’s team went on to identify additional DNA repair-associated proteins that can stimulate IHR, including several that not only promote high levels of IHR, but also repress errors in the DNA repair process. Additional experiments that allowed the team to examine the genomic features of IHR events gave deeper insight into the mechanism of IHR and suggested ways that the technique can be used to make gene therapies safer.

“While there is still a great deal to learn about this new application of IHR, our findings are the foundation for a new gene therapy approach that could help solve some of the big problems with current approaches,” says Aida.

Reference: “Efficient embryonic homozygous gene conversion via RAD51-enhanced interhomolog repair” by Jonathan J. Wilde, Tomomi Aida, Ricardo C.H. del Rosario, Tobias Kaiser, Peimin Qi, Martin Wienisch, Qiangge Zhang, Steven Colvin and Guoping Feng, 26 May 2021, Cell.
DOI: 10.1016/j.cell.2021.04.035

This study was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Poitras Center for Psychiatric Disorders Research at MIT, an NIH/NIMH Conte Center Grant, and the NIH Office of the Director.

Catalytic Converter Theft Is on the Rise – Here’s Why

Catalytic converters cut down on toxic car emissions, and, according to the U.S. Environmental Protection Agency, they’re one of the greatest environmental inventions of all time. Today, catalytic converter theft is on the rise, and that’s partly because of their chemistry.

Video Transcript:

If you turn on your car and it sounds like a lawnmower, congratulations! Your catalytic converter has been stolen.

Apparently, that’s been happening to a lot of people lately.

[RYAN] I got in the car and turned it on and it sounded like I was starting up a drag car.

[JEAN] There was this horrendous roar, like I was at a speedway, no muffler. I called my mechanic and he said, they stole your catalytic converter. That’s all I had to tell him, and he knew.

[RYAN] I called a mechanic that was right down the street, just a few blocks away, and I told them about it and they were like, oh, that’s the catalytic converter, we’ve gotten three of those this week.

[SAM] Catalytic converters were first introduced on a large scale in the 1970s in the US, where air pollution was becoming a huge issue. They cut down on most toxic car emissions by 99 percent, and according to the US Environmental Protection Agency, they’re one of the greatest environmental inventions of all time.

So why are they being stolen?

To really understand why catalytic converters are being stolen we need to start at the beginning, with the engine.

When you drive, your engine burns gasoline to produce the energy that moves your car, and the gasoline it burns is mostly just different types of hydrocarbons. In a perfect world, those hydrocarbons would combust completely, combining with oxygen to form carbon dioxide and water vapor, nothing else.
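
For illustration (an editorial example, not part of the video transcript): the complete combustion of octane, one of the hydrocarbons in gasoline, is

2 C₈H₁₈ + 25 O₂ → 16 CO₂ + 18 H₂O

Every carbon atom ends up in carbon dioxide and every hydrogen atom in water. Incomplete combustion, described next, is what leaves more toxic gases behind.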

But in the real world gasoline doesn’t combust perfectly, and incomplete combustion can leave you with toxic gases like carbon monoxide and nitrogen oxides coming out your tailpipe. And there used to be no regulation for that.

Oh, the air is filled with carbon monoxide? Suck it up, weaklings.

But then in 1970, the Clean Air Act allowed the EPA to regulate air pollution. That’s where the catalytic converter came in.

It is right here, between the engine and the muffler.

Catalytic converters quickly became a very effective way of cutting down on that air pollution. But what makes them so effective?

They use transition metals, specifically combinations of platinum, palladium, and rhodium. These metals are great at giving up and taking back electrons, which makes them good catalysts. That means they can speed up reactions in other molecules without changing themselves.

When the toxic fumes from incomplete combustion in your engine make their way to the catalytic converter, they first reach platinum and rhodium. In this case platinum and rhodium are speeding up a reduction reaction.

In a reduction reaction, a compound loses oxygen atoms and/or gains electrons. Platinum and rhodium reduce the toxic nitrogen oxide compounds by pulling off their oxygen atoms, which are then released as oxygen gas. Then the remaining nitrogen atoms react with each other and nitrogen gas is released.

So we went from toxic nitrogen dioxide or nitric oxide to harmless nitrogen gas and oxygen gas.
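
In equation form (an editorial summary of the net reduction reactions described above):

2 NO → N₂ + O₂
2 NO₂ → N₂ + 2 O₂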

Next, platinum and palladium speed up an oxidation reaction. An oxidation reaction is the opposite of a reduction reaction: a compound gains oxygen atoms and/or loses electrons. Platinum and palladium gather up oxygens and use them to oxidize leftover gasoline hydrocarbons and carbon monoxide, producing mostly carbon dioxide and water.
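
And the oxidation step as a net reaction (again an editorial summary; leftover hydrocarbons burn just as in the octane equation shown earlier):

2 CO + O₂ → 2 CO₂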

So why are people stealing catalytic converters?

Over the last few decades more and more countries have been enacting stricter and stricter emission standards, so there’s more demand for catalytic converters, which means there’s more demand for their metals.

Platinum, palladium, and rhodium are all extremely rare, so their prices have gone way up.

On top of that they’re also used in a lot of electronics, which boosts demand even more.

Rhodium was about 600 bucks per ounce five years ago. Today it’s over 21,000 dollars per ounce, which is 10 times the price of gold. And one catalytic converter has about 400 dollars worth of rhodium in it.
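
A quick back-of-envelope check (editorial arithmetic based on the figures above): at roughly $21,000 per troy ounce,

$400 ÷ $21,000/oz ≈ 0.02 troy ounces ≈ 0.6 grams

so each converter holds only a pinch of rhodium, yet enough to make it worth stealing.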

So people are cutting catalytic converters out of cars and selling them for scrap. Some states are trying to really tightly regulate scrap metal sales to hopefully cut down on catalytic converter theft. There are also researchers looking at less expensive metals for catalytic converters which could help a lot.

Catalytic converters are really important, but they’re not perfect. Our cars are still releasing CO2 into the atmosphere, and catalytic converters are releasing little bits of platinum, palladium, and rhodium, which could be bad for us and other animals.

Unfortunately there’s still no silver bullet for solving all our car emission problems. If you’re burning gasoline, you’re just gonna have emissions.

Subatomic Particle Seen Changing to Antiparticle and Back for the First Time in Extraordinary Experiment

A team of physicists, including researchers from the University of Warwick, has proved that a subatomic particle can switch into its antiparticle alter ego and back again, in a discovery revealed last week.

An extraordinarily precise measurement made by UK researchers using the LHCb experiment at CERN has provided the first evidence that charm mesons can change into their antiparticle and back again.

For more than 10 years, scientists have known that charm mesons, subatomic particles that contain a quark and an antiquark, can travel as a mixture of their particle and antiparticle states, a phenomenon called mixing. However, this new result shows for the first time that they can oscillate between the two states.

Armed with this new evidence, scientists can try to tackle some of the biggest questions in physics about how particles behave outside of the Standard Model, one of which is whether these transitions are caused by unknown particles not predicted by the guiding theory.

Large Hadron Collider Tunnel

The Large Hadron Collider tunnel. Credit: CERN

The research, submitted today to Physical Review Letters and available on arXiv, received funding from the Science and Technology Facilities Council (STFC).

Being one and the other

In the strange world of quantum physics, the charm meson can be itself and its antiparticle at once. This state, known as quantum superposition, results in two particles each with their own mass – a heavier and lighter version of the particle. This superposition allows the charm meson to oscillate into its antiparticle and back again.
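
In standard textbook notation (an editorial gloss, not notation from the paper), the two mass eigenstates are superpositions of the particle state D⁰ and the antiparticle state D̄⁰:

D₁,₂ = p·D⁰ ± q·D̄⁰

It is the mass difference between D₁ and D₂ that sets the rate at which a particle born as a D⁰ oscillates into a D̄⁰ and back.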

Using data collected during the second run of the Large Hadron Collider, researchers from the University of Oxford measured a difference in mass between the two particles of 0.00000000000000000000000000000000000001 grams – or, in scientific notation, 1×10⁻³⁸ g. A measurement of this precision and certainty is only possible when the phenomenon is observed many times, and this is only possible because so many charm mesons are produced in LHC collisions.
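
For scale (an editorial comparison, not a figure from the study): a charm meson has a mass of about 1865 MeV/c², or roughly 3.3×10⁻²⁴ g, so the measured mass difference is

1×10⁻³⁸ g ÷ 3.3×10⁻²⁴ g ≈ 3×10⁻¹⁵

of the particle’s own mass, a few parts in a thousand trillion.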

As the measurement is extremely precise, the research team ensured the analysis method was even more so. To do this, the team used a novel technique originally developed by colleagues at the University of Warwick.

LHCb Experiment at CERN

The LHCb experiment at CERN. Credit: CERN

There are only four types of particle in the Standard Model, the theory that explains particle physics, that can turn into their antiparticle. The mixing phenomenon was first observed in strange mesons in the 1960s and in beauty mesons in the 1980s. Until now, the only other one of the four particles that had been seen to oscillate this way was the strange-beauty meson, a measurement made in 2006.

A rare phenomenon

Professor Guy Wilkinson at the University of Oxford, whose group contributed to the analysis, said:

“What makes this discovery of oscillation in the charm meson particle so impressive is that, unlike the beauty mesons, the oscillation is very slow and therefore extremely difficult to measure within the time that it takes the meson to decay. This result shows the oscillations are so slow that the vast majority of particles will decay before they have a chance to oscillate. However, we are able to confirm this as a discovery because LHCb has collected so much data.”

Professor Tim Gershon at the University of Warwick, developer of the analytical technique used to make the measurement, said:

“Charm meson particles are produced in proton–proton collisions and they travel on average only a few millimeters before transforming, or decaying, into other particles. By comparing the charm meson particles that decay after traveling a short distance with those that travel a little further, we have been able to measure the key quantity that controls the speed of the charm meson oscillation into anti-charm meson – the difference in mass between the heavier and lighter versions of charm meson.”

A new door opens for physics exploration

This discovery of charm meson oscillation opens up a new and exciting phase of physics exploration; researchers now want to understand the oscillation process itself, potentially a major step forward in solving the mystery of matter-antimatter asymmetry. A key area to explore is whether the rate of particle-antiparticle transitions is the same as that of antiparticle-particle transitions, and specifically whether the transitions are influenced or caused by unknown particles not predicted by the Standard Model.

Dr. Mark Williams at the University of Edinburgh, who convened the LHCb Charm Physics Group within which the research was performed, said:

“Tiny measurements like this can tell you big things about the Universe that you didn’t expect.”

The result, 1×10⁻³⁸ g, crosses the ‘five sigma’ level of statistical significance that is required to claim a discovery in particle physics.

Reference: “Observation of the mass difference between neutral charm-meson eigenstates” by the LHCb collaboration (R. Aaij et al.), submitted to Physical Review Letters.
arXiv: 2106.03744

New Insight Into Biosynthesis: How Cyanobacteria Evolve Their Photosynthetic Machinery
Cyanobacterial Thylakoid Membrane

Illustration of the cyanobacterial thylakoid membrane. Credit: Luning Liu et al.

A new study conducted by the researchers at the University of Liverpool reveals how the ancient photosynthetic organisms – cyanobacteria – evolve their photosynthetic machinery and organize their photosynthetic membrane architecture for the efficient capture of solar light and energy transduction.

Oxygenic photosynthesis, carried out by plants, algae, and cyanobacteria, produces energy and oxygen for life on Earth and is arguably the most important biological process. Cyanobacteria are among the earliest phototrophs that can perform oxygenic photosynthesis and make significant contributions to the Earth’s atmosphere and primary production.

Light-dependent photosynthetic reactions are performed by a set of photosynthetic complexes and molecules accommodated in specialized cell membranes called thylakoid membranes. While some studies have reported the structures of photosynthetic complexes and how they perform photosynthesis, researchers still had little understanding of how native thylakoid membranes are built and further developed to become a functional entity in cyanobacterial cells.

The research team, led by Professor Luning Liu from the University’s Institute of Systems, Molecular and Integrative Biology, developed a method to control the formation of thylakoid membranes during cell growth and used state-of-the-art proteomics and microscopic imaging to characterize the stepwise maturation process of thylakoid membranes. Their results are published in the journal Nature Communications.

“We are really thrilled about the findings,” said Professor Liu. “Our research draws a picture about how phototrophs generate and then develop their photosynthetic membranes, and how different photosynthetic components are incorporated and located in the thylakoid membrane to perform efficient photosynthesis – a long-standing question in this field.”

The first author of the study, Dr. Tuomas Huokko, said: “We find that the newly synthesized thylakoid membranes emerge between the peripheral cell membrane, termed the plasma membrane, and the pre-existing thylakoid layer. By detecting the protein compositions and photosynthetic activities during the thylakoid development process, we also find that photosynthetic proteins are well controlled in space and time to evolve and assemble into the thylakoid membranes.”

The new research shows that the cyanobacterial thylakoid membrane is a truly dynamic biological system and can adapt rapidly to environmental changes during bacterial growth. In thylakoids, photosynthetic proteins can diffuse from one position to another and form functional “protein islands” to work together for high photosynthetic efficiency.

“Since cyanobacteria perform plant-like photosynthesis, the knowledge gained from cyanobacteria thylakoid membranes can be extended to plant thylakoids,” added Professor Liu. “Understanding how the natural photosynthetic machinery is evolved and regulated in phototrophs is vital for tuning and enhancing photosynthetic performance. This offers solutions to sustainably improve crop plant photosynthesis and yields, in the context of climate change and growing population. Our research may also benefit the bioinspired design and generation of artificial photosynthetic devices for efficient electron transfer and bioenergy production.”

Reference: “Probing the biogenesis pathway and dynamics of thylakoid membranes” by Tuomas Huokko, Tao Ni, Gregory F. Dykes, Deborah M. Simpson, Philip Brownridge, Fabian D. Conradi, Robert J. Beynon, Peter J. Nixon, Conrad W. Mullineaux, Peijun Zhang and Lu-Ning Liu, 9 June 2021, Nature Communications.
DOI: 10.1038/s41467-021-23680-1

The research was carried out in collaboration with the University’s Centre for Proteome Research, Centre for Cell Imaging, and Biomedical Electron Microscopy Unit, as well as with researchers from University of Oxford, Queen Mary University of London, and Imperial College London. The research was funded by the BBSRC, Royal Society, Wellcome Trust, and Leverhulme Trust.

“Delicious Chemistry” – How a PhD Student Learned To Use His Chemistry Skills in Cooking Dishes
Tshepo Dipheko

PhD student Tshepo Dipheko from South Africa. Credit: RUDN University

What sets chemistry apart from other natural sciences is the ability to get creative and find amazing solutions to long-known problems.

PhD student Tshepo Dipheko from South Africa instills a love of chemistry in people. He doesn’t make a show of it; he simply reminds them that chemistry surrounds a person absolutely everywhere — it’s in the body, the brain, clothing, food, and household items. According to the student, it’s impossible to remain indifferent because “chemistry is everything. We encounter it when drinking coffee or tea, holding a paper cup in our hands, or setting off fireworks on New Year’s Eve.”

Tshepo fell in love with chemistry at school: he was struck not only by the results of colorful chemical reactions, such as the “Pharaoh’s serpent,” but also by the structure of the periodic table and the clarity of chemical equations. Thanks to chemistry, life became ordered by formulas, elements, and reactions.

The passion for order and for the precise measurement of powders and liquids has carried over smoothly into the kitchen. “I’m not the best cook you’ll meet on your way, but I prepare everything with my heart,” says Tshepo. He seems to approach cooking the way he approaches a reaction in the laboratory: everything is done correctly, and the volumes of substances are precisely measured in a scientific way. But he frankly admits that “there is no smell of creativity here.” In cooking, you need to respect the principle of all serious scientists in white coats: mix substances following clear instructions, without unnecessary improvisation.

“South Africa doesn’t have enough specialists in chemistry,” says Tshepo. “Every year we need more and more people with these skills to develop the country’s chemical industry.”

After graduating, Tshepo looks forward to work in the chemical industry and to postdoctoral research, which open up the broadest opportunities for future scientific activity.

Innovative New Nanotechnology Will Enable “Healthy” Electric Current Production Inside the Human Body
The innovative material that creates green energy through mechanical force.

A new nanotechnology development by an international research team led by Tel Aviv University researchers will make it possible to generate electric currents and voltages within the human body through the movement of various organs (mechanical force). The researchers explain that the development involves a new and very strong biological material, similar to collagen, that is non-toxic and causes no harm to the body’s tissues. The researchers believe that this new nanotechnology has many potential applications in medicine, including harvesting clean energy to operate devices implanted in the body (such as pacemakers) through the body’s natural movements, eliminating the need for batteries.

The study was led by Prof. Ehud Gazit of the Shmunis School of Biomedicine and Cancer Research at the Wise Faculty of Life Sciences, the Department of Materials Science and Engineering at the Fleischman Faculty of Engineering, and the Center for Nanoscience and Nanotechnology, along with his lab team, Dr. Santu Bera and Dr. Wei Ji.

Also taking part in the study were researchers from the Weizmann Institute and a number of research institutes in Ireland, China, and Australia. As a result of their findings, the researchers received two ERC Proof of Concept (ERC-POC) grants aimed at turning the scientific research from the ERC grant Gazit had previously won into applied technology. The research was published in the prestigious journal Nature Communications.

Ehud Gazit

Prof. Ehud Gazit. Credit: Tel Aviv University

Prof. Gazit, who is also Founding Director of the Blavatnik Center for Drug Discovery, explains: “Collagen is the most prevalent protein in the human body, constituting about 30% of all of the proteins in our body. It is a biological material with a helical structure and a variety of important physical properties, such as mechanical strength and flexibility, which are useful in many applications. However, because the collagen molecule itself is large and complex, researchers have long been looking for a minimalistic, short and simple molecule that is based on collagen and exhibits similar properties.

“About a year and a half ago, in the journal Nature Materials, our group published a study in which we used nanotechnological means to engineer a new biological material that meets these requirements. It is a tripeptide — a very short molecule called Hyp-Phe-Phe, consisting of only three amino acids — capable of self-assembling, through a simple process, into a collagen-like helical structure that is flexible and boasts a strength similar to that of the metal titanium. In the present study, we sought to examine whether the new material we developed bears another feature that characterizes collagen — piezoelectricity. Piezoelectricity is the ability of a material to generate electric currents and voltage as a result of the application of mechanical force, or, vice versa, to create a mechanical force as the result of exposure to an electric field.”
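
In its simplest one-dimensional, linear form (an editorial gloss, not notation from the study), the direct piezoelectric effect can be written as

P = d·σ

where σ is the applied mechanical stress, P is the resulting electric polarization (charge per unit area), and d is the material’s piezoelectric coefficient. The converse effect Prof. Gazit describes uses the same coefficient: an applied electric field E produces a mechanical strain S = d·E.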

In the study, the researchers created nanometric structures of the engineered material, and with the help of advanced nanotechnology tools, applied mechanical pressure on them. The experiment revealed that the material does indeed produce electric currents and voltage as a result of the pressure. Moreover, tiny structures of only hundreds of nanometers demonstrated one of the highest levels of piezoelectric ability ever discovered, comparable or superior to that of the piezoelectric materials commonly found in today’s market (most of which contain lead and are therefore not suitable for medical applications).

According to the researchers, the discovery of piezoelectricity of this magnitude in a nanometric material is of great significance, as it demonstrates the ability of the engineered material to serve as a kind of tiny motor for very small devices. Next, the researchers plan to apply crystallography and computational quantum mechanical methods (density functional theory) in order to gain an in-depth understanding of the material’s piezoelectric behavior and thereby enable the accurate engineering of crystals for the building of biomedical devices.

Prof. Gazit adds: “Most of the piezoelectric materials that we know of today are toxic lead-based materials, or polymers, meaning they are not friendly to the environment or the human body. Our new material, however, is completely biological, and therefore suitable for uses within the body. For example, a device made from this material may replace a battery that supplies energy to implants like pacemakers; such batteries currently must be replaced from time to time. Body movements — like heartbeats, jaw movements, bowel movements, or any other movement that occurs in the body on a regular basis — will charge the device with electricity, which will continuously activate the implant.”

Now, as part of their continuing research, the researchers are seeking to understand the molecular mechanisms of the engineered material with the goal of realizing its immense potential and turning this scientific discovery into applied technology. At this stage, the focus is on the development of medical devices, but Prof. Gazit emphasizes that “environmentally friendly piezoelectric materials, such as the one we have developed, have tremendous potential in a wide range of areas because they produce green energy using mechanical force that is being used anyway. For example, a car driving down the street can turn on the streetlights. These materials may also replace lead-containing piezoelectric materials that are currently in widespread use, but that raise concerns about the leakage of toxic metal into the environment.”

References:

“Molecular engineering of piezoelectricity in collagen-mimicking peptide assemblies” by Santu Bera, Sarah Guerin, Hui Yuan, Joseph O’Donnell, Nicholas P. Reynolds, Oguzhan Maraba, Wei Ji, Linda J. W. Shimon, Pierre-Andre Cazade, Syed A. M. Tofail, Damien Thompson, Rusen Yang and Ehud Gazit, 11 May 2021, Nature Communications.
DOI: 10.1038/s41467-021-22895-6

“Rigid helical-like assemblies from a self-aggregating tripeptide” by Santu Bera, Sudipta Mondal, Bin Xue, Linda J. W. Shimon, Yi Cao and Ehud Gazit, 15 April 2019, Nature Materials.
DOI: 10.1038/s41563-019-0343-2

Rosatom becomes a superconductor supplier to CERN

The European Organization for Nuclear Research (CERN) has successfully completed acceptance tests for Russian niobium-tin superconductors manufactured under the Future Circular Collider (FCC) project, which is to replace CERN’s Large Hadron Collider in Switzerland.

The design of the superconducting fibers and the technology for their production were developed at the High-Tech Research Institute for Inorganic Materials “Academician A. Bochvar” in Moscow, and the pilot batch of wires, with a total length of 50 km, was produced by the Chepetsk Mechanical Plant in Glazov.

The work is being carried out under an agreement between CERN and Rosatom’s fuel company, TVEL.

As a result of the successful tests, TVEL became a qualified supplier of superconductors for CERN’s particle accelerator development programs.

The magnetic system is one of the key elements of the Future Circular Collider. Its colossal size – the circumference is about 100 kilometers – requires the supply of a significant amount of superconducting filament, which can be produced only through the joint efforts of the countries possessing the technology: Russia, the USA, South Korea, Japan, and China.

The Future Circular Collider in Switzerland is a key international research project intended to take fundamental research in particle physics to a new level. The new particle accelerator will help scientists understand the nature of dark matter, explain the emergence of the matter-antimatter asymmetry in the universe, and resolve other questions beyond the so-called Standard Model of modern physics.

Unlike the superconductors created at Rosatom for the international thermonuclear reactor ITER, which is being built in France, the filaments for the Future Circular Collider project were produced by a method that yields a conductor with a significantly higher critical current density – a necessary condition for the creation of modern magnetic systems for accelerators and other installations in high-energy physics.

Applied superconductivity is one of the strategic directions in the development of non-nuclear technologies at Rosatom’s fuel company TVEL. Low-temperature superconducting materials are indispensable for modern medical equipment (magnetic resonance imaging), as well as for high- and ultra-high-resolution analytical equipment such as nuclear magnetic resonance spectrometers.

“We are proud that the superconductors we produce will help implement the largest Russian and international research projects in the field of high-energy physics and fusion – the Future Circular Collider, the International Thermonuclear Experimental Reactor, the NICA accelerator complex, and the FAIR facility for the study of ions and antiprotons. We are also developing niobium-titanium conductors for the Future Circular Collider and especially pure resonator niobium, which will be needed for the manufacture of its acceleration systems,” said TVEL President Natalia Nikipelova.

Energy Saving Electronics Breakthrough – Paving Way for a Carbon-Neutral Society

Quantifying electric fields in semiconductor devices: The schematic shows electric field distribution in the channel of a GaN transistor; laser beams highlight the second harmonic generation (SHG) nature of the technique. Credit: Yuke Cao

Researchers at the University of Bristol have discovered a method that will allow for faster communication systems and better energy-saving electronics.

The breakthrough was made by establishing how to remotely measure the electric field inside a semiconductor device for the first time. A semiconductor is a material, such as silicon, that can be used in electronic devices to control electric current.

Now, in this new study, published on June 21, 2021, in Nature Electronics, scientists outline how to precisely quantify this electric field, meaning next-generation power and radio frequency electronic devices can be developed which have the potential to be faster, and more reliable, as well as more energy-efficient.

Semiconductor device design can proceed by trial and error, though more commonly it is based on device simulations, which then provide the basis for manufacturing semiconductor devices for real-life applications. For new and emerging semiconductor materials, however, it has often been unknown how accurate these simulations actually are.

Prof Martin Kuball of the University of Bristol’s School of Physics said: “Semiconductors can be made to conduct positive or negative charges and can therefore be designed to modulate and manipulate current. However, these semiconductor devices do not stop with silicon; there are many others, including gallium nitride (used in blue LEDs, for example). These semiconductor devices, which for instance convert an AC current from a power line into a DC current, result in a loss of energy as waste heat — look at your laptop for example, the power brick is getting warm or even hot. If we could improve efficiency and reduce this waste heat, we will save energy.

“One applies a voltage to an electronic device, and as a result there is an output current used in the application. Inside this electronic device is an electric field which determines how this device works and how long it will be operational and how good its operation is. No one could actually measure this electric field, so fundamental to the device operation. One always relied on simulation which is hard to trust unless you can actually test its accuracy.”

To make high-performance, long-lasting electronic devices out of these new materials, it is important that researchers find the optimal design, in which electric fields do not exceed the critical value that would cause degradation or failure. Experts plan to use newly emerging materials such as Gallium Nitride and Gallium Oxide rather than Silicon, allowing operation at higher frequencies and higher voltages, respectively, so that new circuits are possible which reduce energy loss. The work published by the University of Bristol group provides an optical tool for directly measuring the electric field within these new devices. This will underpin future efficient power electronics in applications such as solar and wind power stations feeding into the national grid, electric cars, trains, and planes. Reduced energy loss means societies do not need to produce as much energy in the first place.

Prof Kuball said: “Considering that these devices are operated at higher voltages, this also means electric fields in the devices are higher and this, in turn, means they can fail more easily. The new technique we have developed enables us to quantify electric fields within the devices, allowing accurate calibration of the device simulations that are used to design electronic devices so that electric fields do not exceed critical limits and cause failure.”
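
The figure caption above indicates that the measurement is based on second harmonic generation (SHG). To make the idea concrete, here is a minimal Python sketch of the generic electric-field-induced SHG (EFISH) model, in which the SHG signal grows quadratically with the local DC field. The susceptibility values `chi2_eff` and `chi3_eff` are hypothetical placeholders, and this illustrates the general principle only, not the calibration procedure of the Bristol paper:

```python
import numpy as np

# Hypothetical effective susceptibilities (placeholders, NOT values from the paper).
chi2_eff = 1.0   # field-independent second-order response (arb. units)
chi3_eff = 0.05  # third-order response coupling to the DC field (arb. units per V/um)

def shg_intensity(E_dc, I_pump=1.0):
    """Generic EFISH model: I(2w) ~ |chi2 + chi3 * E_dc|^2 * I(w)^2 (arb. units)."""
    return np.abs(chi2_eff + chi3_eff * E_dc) ** 2 * I_pump ** 2

def field_from_shg(I_shg, I_pump=1.0):
    """Invert the model to recover the DC field from a measured SHG intensity.
    Assumes the positive branch of the square root (field direction known)."""
    return (np.sqrt(I_shg / I_pump ** 2) - chi2_eff) / chi3_eff

# Round-trip check on a synthetic field profile across a device channel (V/um).
E_true = np.linspace(0.0, 100.0, 11)
E_recovered = field_from_shg(shg_intensity(E_true))
print(np.allclose(E_true, E_recovered))  # True
```

In practice the inversion would rest on a calibration against known fields; the sketch only shows why a quadratic field dependence makes the signal invertible.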

Prof Kuball and his team plan to work with key industrial stakeholders to apply the technique to advance their device technology. Within an academic context, they will engage with partners in the $12M US Department of Energy (DOE) ULTRA center, in which they are partners, to use this technique to make ultra-wide-bandgap device technology a reality, allowing energy savings in excess of 10% across the globe.

“This development helps the UK and the world to develop energy-saving semiconductor devices, which is a step towards a carbon-neutral society,” he added.

The technique was developed as part of an Engineering and Physical Sciences Research Council (EPSRC) project.

Reference: “Electric field mapping of wide-bandgap semiconductor devices at a submicrometre resolution” by Yuke Cao, James W. Pomeroy, Michael J. Uren, Feiyuan Yang and Martin Kuball, 21 June 2021, Nature Electronics.
DOI: 10.1038/s41928-021-00599-5

Major Ocean-Observing Satellite – Copernicus Sentinel-6 – Goes Live!
Major Ocean-Observing Satellite – Copernicus Sentinel-6 – Goes Live!
Copernicus Sentinel-6 Radar Altimeter

The Copernicus Sentinel-6 Poseidon-4 dual-frequency (C- and Ku-band) radar altimeter uses an innovative interleaved mode that has improved performance compared to previous satellite altimeter designs. Credit: ESA/ATG medialab

Following liftoff last November and more than six months spent carefully calibrating the most advanced mission dedicated to measuring sea-level rise, Copernicus Sentinel-6 Michael Freilich is now operational – meaning that its data are available to climate researchers, ocean-weather forecasters, and other users.

Sentinel-6 is one of the European Union’s family of Copernicus missions, but its implementation is the result of an exceptional cooperation between the European Commission, ESA, Eumetsat, NASA, and NOAA, with contributions from the French space agency CNES. The mission comprises two identical satellites launched five years apart: Copernicus Sentinel-6 Michael Freilich, launched on November 21, 2020, and Copernicus Sentinel-6B, which will be launched in 2025.

Sea-level rise is a key indicator of climate change, so accurately monitoring the changing height of the sea surface over decades is essential for climate science, for policy-making and, ultimately, for protecting the lives of those in low-lying regions at risk.

Improvement of Sentinel-6 Significant Wave Height

Improvement of Sentinel-6 significant wave height with respect to Jason-3. Over the last six months, Copernicus Sentinel-6 Michael Freilich has been orbiting in tandem with the current altimetry reference mission, Jason-3, so that the satellites have the same ‘view’ of the ocean. Establishing the differences between Sentinel-6 and Jason-3 is important if stability in the sea-level rise time series from satellite altimetry is to be maintained. The plot shows the standard deviation of the significant wave height over ocean for different significant wave heights for a full 10-day cycle of Copernicus Sentinel-6 and Jason-3 using low-resolution mode data. Significant wave height is defined as the mean height of the highest third of waves in a given sample of sea state. The plot highlights the improved (lower) significant wave height noise from Copernicus Sentinel-6 compared to Jason-3. The background bar chart shows the percentage of data points as a function of significant wave height (0.1 m binning). Credit: CLS

Using the latest radar altimetry technology, developed by ESA, this new mission will extend the long-term record of sea-surface height measurements begun in 1992 by the French–US Topex-Poseidon satellite and continued by the Jason series of satellite missions.

Sentinel-6 Michael Freilich has now picked up the baton and will extend this dataset – a dataset that is the ‘gold standard’ for climate studies.

Julia Figa Saldana, ocean altimetry program manager at Eumetsat, said, “We have been flying Copernicus Sentinel-6 Michael Freilich on the same orbit as the current altimetry reference mission, Jason-3, for the past six months so that the satellites have the same ‘view’ of the ocean.

“Experts from around the world have collaborated intensively over the past six months, despite the workplace constraints caused by the coronavirus pandemic, to cross-calibrate their data to ensure accuracy. The data are now being processed at Eumetsat in Darmstadt, Germany, from where the satellite is also being controlled, and from where the data are released to ocean and weather forecasting users.”

Improvement of Sentinel-6 Range Noise

Improvement of Sentinel-6 range noise with respect to Jason-3. Over the last six months, Copernicus Sentinel-6 Michael Freilich has been orbiting in tandem with the current altimetry reference mission, Jason-3, so that the satellites have the same ‘view’ of the ocean. Establishing the differences between Sentinel-6 and Jason-3 is important if stability in the sea-level rise time series from satellite altimetry is to be maintained. The plot shows the standard deviation of re-tracked altimeter range over ocean for different significant wave heights for a full 10-day cycle of Sentinel-6 and Jason-3 using low-resolution mode data. Significant wave height is defined as the mean height of the highest third of waves in a given sample of sea state. The plot highlights the improved (lower) range noise from Sentinel-6 compared to Jason-3. The background bar chart shows the percentage of data points as a function of significant wave height (0.1 m binning). Credit: CLS
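
Both figure captions describe the same analysis: group the along-track measurements into 0.1 m significant-wave-height bins and compute the standard deviation (the noise) in each bin, along with the percentage of data points per bin. A minimal sketch of that binning, using synthetic stand-in arrays rather than real Sentinel-6 or Jason-3 products:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for one 10-day cycle of along-track data (not real products).
swh = rng.uniform(0.0, 8.0, 100_000)                        # significant wave height (m)
range_noise = rng.normal(0.0, 0.01 * (1 + swh), swh.size)   # toy noise model (m)

bin_edges = np.arange(0.0, 8.0 + 0.1, 0.1)   # 0.1 m binning, as in the plots
bin_idx = np.digitize(swh, bin_edges) - 1

# Standard deviation of the range noise per SWH bin (the plotted curve).
std_per_bin = np.array([
    range_noise[bin_idx == i].std() if np.any(bin_idx == i) else np.nan
    for i in range(len(bin_edges) - 1)
])
# Percentage of data points per bin (the background bar chart).
pct_per_bin = np.array([
    100.0 * np.count_nonzero(bin_idx == i) / swh.size
    for i in range(len(bin_edges) - 1)
])
print(std_per_bin[:5], pct_per_bin[:5])
```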

Craig Donlon, ESA’s Sentinel-6 mission scientist, said, “Building on a long line of European heritage dual-frequency altimeter missions, Sentinel-6’s Poseidon-4 altimeter was designed to bring new high-resolution Ku-band synthetic aperture radar measurements into the altimetry reference time series.

“To manage the transition from Jason-3’s lower-resolution measurements to Sentinel-6’s high-resolution products with confidence, Poseidon-4 acquires conventional low-resolution measurements simultaneously with high-resolution synthetic aperture radar measurements. The high-resolution products will be available later this year. We are really pleased to see that the data from Sentinel-6 show great performance, based on validation by independent measurements on the ground.”

The Permanent Facility for Altimetry Calibration on the island of Crete in Greece is home to a transponder (CDN1) that is key to this validation, ensuring that Sentinel-6’s low-resolution data are consistent with those of Jason-3. The transponder receives the Sentinel-6 radar signal, amplifies it, and transmits it back to the satellite, providing a well-characterized external calibration source.

Dr. Donlon explained, “Measurements from the CDN1 transponder show that the absolute difference between measurements from Sentinel-6 and Jason-3 is less than 2 mm, which is remarkable for two independent satellites operating at an altitude of 1330 km.

“Establishing the differences between Sentinel-6 and Jason-3 is important if stability in the sea-level rise time series from satellite altimetry is to be maintained with low uncertainties.”

Nadya Vinogradova-Shiffer, NASA program scientist for Sentinel-6 Michael Freilich, said, “The release of this data marks the beginning of a new Sentinel era of ocean satellite altimetry for the NASA science community and the international Ocean Surface Topography Science Team, who are excited, ready, and eager to expand nearly three decades of discoveries in ocean and climate science.” 

As of June 22, 2021, two streams of low-resolution sea-surface height data are available to the public. The first is available within hours of Sentinel-6’s Poseidon-4 altimeter collecting it, and the second comes a couple of days after acquisition. The difference in when the products become available balances accuracy against delivery timeliness for tasks such as forecasting the weather or helping to monitor the formation of hurricanes.

The third data stream, which is slated for distribution later this year or early next year, will be the most accurate.

European Commission’s Director for Space, Matthias Petschke, said, “Preparing our resilience and adapting to sea-level rise as an effect of climate change is a top priority in the decades to come, as part of the European Green Deal. From scenarios published in 2015 for COP21, we can observe that sea-level rise is accelerating faster than expected. It will affect our EU coasts in the decades to come – this is a certainty – and it is critical for some European countries.

“The Jason missions in the past, and now Copernicus Sentinel-6, are unique solutions for giving us accurate information on this trend, observing and monitoring it accurately, as well as revealing this alarming acceleration of the rise.”

Nano Optics Breakthrough: Researchers Observe Sound-Light Pulses in 2D Materials for the First Time
Nano Optics Breakthrough: Researchers Observe Sound-Light Pulses in 2D Materials for the First Time
Yuval Adiv, Yaniv Kurman, Ido Kaminer, Raphael Dahan and Kangpeng Wang

Research team, L-R: Yuval Adiv, Yaniv Kurman, Professor Ido Kaminer, Raphael Dahan and Dr. Kangpeng Wang. Credit: Technion – Israel Institute of Technology

A Spatiotemporal Symphony of Light

Using an ultrafast transmission electron microscope, researchers from the Technion – Israel Institute of Technology have, for the first time, recorded the propagation of combined sound and light waves in atomically thin materials. 

The experiments were performed in the Robert and Ruth Magid Electron Beam Quantum Dynamics Laboratory headed by Professor Ido Kaminer, of the Andrew and Erna Viterbi Faculty of Electrical & Computer Engineering and the Solid State Institute. 

Single-layer materials, also known as 2D materials, are novel solids consisting of a single layer of atoms. Graphene, the first 2D material discovered, was isolated in 2004, an achievement that garnered the 2010 Nobel Prize in Physics. Now, for the first time, Technion scientists show how pulses of light move inside these materials. Their findings, “Spatiotemporal Imaging of 2D Polariton Wavepacket Dynamics Using Free Electrons,” were published in Science and have drawn great interest from scientists.

Sound-Light Wave in 2D Material

Illustration of a Sound-Light wave in 2D materials and its measurement using free electrons. Credit: Technion – Israel Institute of Technology

Light moves through space at 300,000 km/s. Moving through water or glass, it slows down by a fraction. But when moving through certain few-layer solids, light slows down almost a thousand-fold. This occurs because the light makes the atoms of these special materials vibrate, creating sound waves (also called phonons), and these atomic sound waves in turn create light as they vibrate. Thus, the pulse is actually a tightly bound combination of sound and light, called a “phonon-polariton.” Lit up, the material “sings.”

The scientists shone pulses of light along the edge of a 2D material, producing hybrid sound-light waves in the material. Not only were they able to record these waves, they also found that the pulses can spontaneously speed up and slow down. Surprisingly, the waves even split into two separate pulses, moving at different speeds.

The experiment was conducted using an ultrafast transmission electron microscope (UTEM). Unlike optical microscopes and scanning electron microscopes, here the electrons pass through the sample and are then received by a detector. This process allowed the researchers to track the sound-light wave with unprecedented resolution, both in space and in time. The time resolution is 50 femtoseconds (50×10⁻¹⁵ seconds); the corresponding number of frames per second is similar to the number of seconds in a million years.
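
That comparison is easy to verify with a couple of lines of arithmetic: one frame every 50 femtoseconds is about 2×10¹³ frames per second, while a million years contains roughly 3×10¹³ seconds, so the two numbers are indeed of the same order:

```python
frame_interval = 50e-15                 # 50 femtoseconds per frame, in seconds
frames_per_second = 1 / frame_interval  # ~2.0e13

seconds_per_year = 365.25 * 24 * 3600               # ~3.16e7
seconds_in_million_years = 1e6 * seconds_per_year   # ~3.16e13

print(f"{frames_per_second:.2e} frames/s vs {seconds_in_million_years:.2e} s")
```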

“The hybrid wave moves inside the material, so you cannot observe it using a regular optical microscope,” Kurman explained. “Most measurements of light in 2D materials are based on microscopy techniques that use needle-like objects that scan over the surface point-by-point, but every such needle-contact disturbs the movement of the wave we try to image. In contrast, our new technique can image the motion of light without disturbing it. Our results could not have been achieved using existing methods. So, in addition to our scientific findings, we present a previously unseen measurement technique that will be relevant to many more scientific discoveries.”

This study was born at the height of the COVID-19 pandemic. During months of lockdown, with the universities closed, Yaniv Kurman, a graduate student in Prof. Kaminer’s lab, sat at home making the mathematical calculations that predicted how light pulses should behave in 2D materials and how they could be measured. Meanwhile, Raphael Dahan, another student in the same lab, worked out how to focus infrared pulses into the group’s electron microscope and made the necessary upgrades to accomplish that. Once the lockdown was over, the group was able to confirm Kurman’s theory and even reveal additional phenomena they had not expected.

While this is a fundamental science study, the scientists expect it to have multiple research and industry applications. “We can use the system to study different physical phenomena that are not otherwise accessible,” said Prof. Kaminer. “We are planning experiments that will measure vortices of light, experiments in chaos theory, and simulations of phenomena that occur near black holes. Moreover, our findings may permit the production of atomically thin fiber-optic ‘cables,’ which could be placed within electrical circuits and transmit data without overheating the system – a task that currently faces considerable challenges due to circuit miniaturization.”

Yaniv Kurman and Ido Kaminer

L-R: Yaniv Kurman and Professor Ido Kaminer. Credit: Technion – Israel Institute of Technology

The team’s work initiates the research of light pulses inside a novel set of materials, broadens the capabilities of electron microscopes, and promotes the possibility of optical communication through atomically thin layers.

“I was thrilled by these findings,” said Professor Harald Giessen of the University of Stuttgart, who was not part of this research. “This presents a real breakthrough in ultrafast nano-optics and represents the state of the art and the leading edge of the scientific frontier. The observation in real space and real time is beautiful and has, to my knowledge, not been demonstrated before.”

Another prominent scientist not involved with the study, John Joannopoulos from the Massachusetts Institute of Technology, added that “The key in this accomplishment is in the clever design and development of an experimental system. This work by Ido Kaminer and his group and colleagues is a critical step forward. It is of great interest both scientifically and technologically, and is of critical importance to the field.”

Prof. Kaminer is also affiliated with the Helen Diller Quantum Center and the Russell Berrie Nanotechnology Institute. The study was spearheaded by Ph.D. students Yaniv Kurman and Raphael Dahan. Other members of the research team were Dr. Kangpeng Wang, Michael Yannai, Yuval Adiv, and Ori Reinhardt. The research was based on an international collaboration with the groups of Prof. James Edgar (Kansas State University), Prof. Mathieu Kociak (Université Paris Sud), and Prof. Frank Koppens (ICFO, The Barcelona Institute of Science and Technology). 

Reference: “Spatiotemporal imaging of 2D polariton wave packet dynamics using free electrons” by Yaniv Kurman, Raphael Dahan, Hanan Herzig Sheinfux, Kangpeng Wang, Michael Yannai, Yuval Adiv, Ori Reinhardt, Luiz H. G. Tizei, Steffi Y. Woo, Jiahan Li, James H. Edgar, Mathieu Kociak, Frank H. L. Koppens and Ido Kaminer, 11 June 2021, Science.
DOI: 10.1126/science.abg9015

Drinking Any Type of Coffee (Even Decaf) Associated With Reduced Risk of Chronic Liver Disease
Drinking Any Type of Coffee (Even Decaf) Associated With Reduced Risk of Chronic Liver Disease

Pouring Cup of Coffee

Drinking coffee that is caffeinated (ground or instant) or decaffeinated is associated with a reduced risk of developing chronic liver disease and related liver conditions, according to a study published in the open access journal BMC Public Health.

Researchers at the Universities of Southampton and Edinburgh, UK, found that drinking any type of coffee was associated with a reduced risk of developing and dying from chronic liver disease compared to not drinking coffee, with the benefit peaking at three to four cups per day.

The authors studied UK Biobank data on 495,585 participants with known coffee consumption, who were followed over a median of 10.7 years to monitor who developed chronic liver disease and related liver conditions.

Of all participants included in the study, 78% (384,818) consumed ground or instant caffeinated or decaffeinated coffee, while 22% (109,767) did not drink any type of coffee. During the study period, there were 3,600 cases of chronic liver disease, including 301 deaths. Additionally, there were 5,439 cases of chronic liver disease or steatosis (a build-up of fat in the liver, also known as fatty liver disease), and 184 cases of hepatocellular carcinoma, a type of liver cancer.

Compared to non-coffee drinkers, coffee drinkers had a 21% reduced risk of chronic liver disease, a 20% reduced risk of chronic or fatty liver disease, and a 49% reduced risk of death from chronic liver disease. The maximum benefit was seen in the group who drank ground coffee, which contains high levels of kahweol and cafestol, ingredients that have been shown to be beneficial against chronic liver disease in animals.

Instant coffee, which has low levels of kahweol and cafestol, was also associated with a reduced risk of chronic liver disease. While the reduction in risk was smaller than that associated with ground coffee, the finding may suggest that other ingredients, or potentially a combination of ingredients, are beneficial.

Dr. Oliver Kennedy, the lead author, said: “Coffee is widely accessible, and the benefits we see from our study may mean it could offer a potential preventative treatment for chronic liver disease. This would be especially valuable in countries with lower income and worse access to healthcare, where the burden of chronic liver disease is highest.”

The authors caution that, as coffee consumption was only reported when participants first enrolled in the study, the study does not account for any changes in the amount or type of coffee they consumed over the 10.7-year study period. As participants were predominantly white and from a higher socio-economic background, the findings may be difficult to generalize to other countries and populations.

The authors suggest that future research could test the relationship between coffee and liver disease with more rigorous control of the amount of coffee consumed. They also propose validating their findings in more diverse groups of participants.

Reference: “All coffee types decrease the risk of adverse clinical outcomes in chronic liver disease: a UK Biobank study” 21 June 2021, BMC Public Health.
DOI: 10.1186/s12889-021-10991-7

Anti-aging Protein in Red Blood Cells Helps Prevent Mental Decline, Poor Memory and Hearing Deficits
Anti-aging Protein in Red Blood Cells Helps Prevent Mental Decline, Poor Memory and Hearing Deficits

Blood Cells Vein

Mice lacking ADORA2B in their blood exhibit accelerated aging, including poor memory and hearing deficits.

Research conducted by Qiang et al. has discovered a link between a protein in red blood cells and age-related decline in cognitive performance. Published in the open-access journal PLOS Biology on June 17, 2021, the study shows that depleting mouse blood of the protein ADORA2B leads to faster declines in memory, delays in auditory processing, and increased inflammation in the brain.

As life expectancies around the world increase, so does the number of people who will experience age-related cognitive decline. Because the amount of oxygen in the blood also declines with age, the team hypothesized that aging in the brain might be naturally held at bay by adenosine receptor A2B (ADORA2B), a protein on the membrane of red blood cells that is known to help release oxygen from the blood cells so it can be used by the body. To test this idea, they created mice lacking ADORA2B in their blood and compared behavioral and physiological measures with those of control mice.

The team found that as the mice got older, the hallmarks of cognitive decline — poor memory, hearing deficits, and inflammatory responses in the brain — were all greater in the mice lacking ADORA2B than in the control mice. Additionally, after experiencing a period of oxygen deprivation, the behavioral and physiological effects on young mice without ADORA2B were much greater than those on normal young mice.

Thus, aging in the brain is naturally reduced by ADORA2B, which helps get oxygen to the brain when needed. Further testing will be needed to determine whether ADORA2B levels naturally decline with age and whether treatment with drugs that activate ADORA2B can reduce cognitive decline in normal mice.

Dr. Xia, the leader of the study, commented: “Red blood cells have an irreplaceable function to deliver oxygen to maintain bioenergetics of every single cell within our body. However, their function in age-related cognition and hearing function remains largely unknown. Our findings reveal that the red blood cell ADORA2B signaling cascade combats early onset of age-related decline in cognition, memory and hearing by promoting oxygen delivery in mice and immediately highlight multiple new rejuvenating targets.”

Reference: “Erythrocyte adenosine A2B receptor prevents cognitive and auditory dysfunction by promoting hypoxic and metabolic reprogramming” by Qingfen Qiang, Jeanne M. Manalo, Hong Sun, Yujin Zhang, Anren Song, Alexander Q. Wen, Y. Edward Wen, Changhan Chen, Hong Liu, Ying Cui, Travis Nemkov, Julie A. Reisz, George Edwards III, Fred A. Perreira, Rodney E. Kellems, Claudio Soto, Angelo D’Alessandro and Yang Xia, 17 June 2021, PLOS Biology.
DOI: 10.1371/journal.pbio.3001239

Highly Chirped Laser Pulses Defy “Conventional Wisdom”
Highly Chirped Laser Pulses Defy “Conventional Wisdom”
Chirped Pulse

An illustration of the optical fiber Kerr resonator, which Rochester researchers used with a spectral filter to create highly chirped laser pulses. The rainbow pattern in the foreground shows how the colors of a chirped laser pulse are separated in time. Credit: University of Rochester illustration / Michael Osadciw

University of Rochester researchers describe the first highly chirped pulses created by using a spectral filter in a Kerr resonator.

The 2018 Nobel Prize in Physics was shared by researchers who pioneered a technique to create ultrashort, yet extremely high-energy laser pulses at the University of Rochester.

Now researchers at the University’s Institute of Optics have produced those same high-powered pulses—known as chirped pulses—in a way that works even with relatively low-quality, inexpensive equipment. The new work could pave the way for:

  • Better high-capacity telecommunication systems
  • Improved astrophysical calibrations used to find exoplanets
  • Even more accurate atomic clocks
  • Precise devices for measuring chemical contaminants in the atmosphere

In a paper in Optica, the researchers describe the first demonstration of highly chirped pulses created by using a spectral filter in a Kerr resonator – a type of simple optical cavity that operates without amplification. These cavities have stirred wide interest among researchers because they can support “a wealth of complicated behaviors including useful broadband bursts of light,” says coauthor William Renninger, assistant professor of optics.

By adding the spectral filter, the researchers can manipulate a laser pulse in the resonator so that it widens in time as the beam’s colors are separated.

The new method is advantageous because “as you widen the pulse, you’re reducing the peak of the pulse, and that means you can then put more overall energy into it before it reaches a high peak power that causes problems,” Renninger says.
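
Renninger’s point can be illustrated numerically: applying a quadratic spectral phase (a chirp) to a pulse of fixed energy stretches it in time and lowers its peak power. A short, self-contained sketch in arbitrary units, not a model of the actual Kerr resonator:

```python
import numpy as np

# Transform-limited Gaussian pulse on a time grid (arbitrary units).
t = np.linspace(-50, 50, 4096)
dt = t[1] - t[0]
field = np.exp(-t**2 / 2)

# Chirp the pulse by applying a quadratic spectral phase (group-delay dispersion).
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)
gdd = 30.0  # chirp strength (arbitrary units squared)
chirped = np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * gdd * omega**2))

# The energy (integral of |field|^2) is unchanged, but the peak power
# drops roughly 30-fold as the pulse stretches in time.
print(f"energy:     {np.sum(np.abs(field)**2) * dt:.3f} -> "
      f"{np.sum(np.abs(chirped)**2) * dt:.3f}")
print(f"peak power: {np.abs(field).max()**2:.3f} -> {np.abs(chirped).max()**2:.3f}")
```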

The new work is related to the approach used by Nobel laureates Donna Strickland ’89 (PhD) and Gerard Mourou, who helped usher in a revolution in the use of laser technology when they pioneered chirped pulse amplification while doing research at the University’s Laboratory for Laser Energetics.

The work takes advantage of the way light is dispersed as it passes through optical cavities. Most prior cavities require rare “anomalous” dispersion, which means that the blue light travels faster than red light.

The chirped pulses, however, live in “normal” dispersion cavities, in which red light travels faster. The dispersion is called “normal” because it is by far the more common case, which greatly increases the number of cavities that can generate pulses.

Prior cavities are also designed to have less than one percent loss, whereas the chirped pulses can survive in the cavity despite very high energy loss. “We’re showing chirped pulses that remain stable even with more than 90 percent energy loss, which really challenges the conventional wisdom,” Renninger says.

“With a simple spectral filter, we are now using loss to generate pulses in lossy and normal dispersion systems. So, in addition to improved energy performance, it really opens up what kinds of systems can be used.”

Other collaborators include lead author Christopher Spiess, Qiang Yang, and Xue Dong, all current and former graduate research assistants in Renninger’s lab, and Victor Bucklew, a former postdoctoral associate in the lab.

“We’re very proud of this paper,” Renninger says. “It has been a long time coming.”

Reference: “Chirped dissipative solitons in driven optical resonators” by Christopher Spiess, Qian Yang, Xue Dong, Victor G. Bucklew and William H. Renninger, 10 June 2021, Optica.
DOI: 10.1364/OPTICA.419771

The University of Rochester and the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health supported this project with funding.

NASA Struggles to Fix Failure of Hubble Space Telescope's 1980s Computer
NASA Struggles to Fix Failure of Hubble Space Telescope’s 1980s Computer
Hubble Space Telescope in Orbit

The Hubble Space Telescope was launched by the space shuttle Discovery on April 24, 1990. By avoiding the distortions of the atmosphere, Hubble has an unobstructed view of planets, stars, and galaxies, some more than 13.4 billion light-years away. Credit: NASA

NASA continues to work on resolving an issue with the payload computer on the Hubble Space Telescope. The operations team will be running tests and collecting more information on the system to further isolate the problem. The science instruments will remain in a safe mode state until the issue is resolved. The telescope itself and science instruments remain in good health.

The computer halted on Sunday, June 13. An attempt to restart the computer failed on Monday, June 14. Initial indications pointed to a degrading computer memory module as the source of the computer halt. When the operations team attempted to switch to a back-up memory module, however, the command to initiate the backup module failed to complete. Another attempt was conducted on both modules Thursday evening to obtain more diagnostic information while again trying to bring those memory modules online. However, those attempts were not successful.

The payload computer is a NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s that is located on the Science Instrument Command and Data Handling unit. The computer’s purpose is to control and coordinate the science instruments and monitor them for health and safety purposes. It is fully redundant in that a second computer, along with its associated hardware, exists on orbit that can be switched over to in the event of a problem. Both computers can access and use any of four independent memory modules, which each contain 64K of Complementary Metal-Oxide Semiconductor (CMOS) memory. The payload computer uses only one memory module operationally at a time, with the other three serving as backups.
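
As a toy illustration of the redundancy scheme just described (two interchangeable computers sharing four memory modules, one module active at a time), here is a hypothetical Python sketch of the failover logic; it is purely illustrative and is not NASA’s flight software:

```python
# Toy model of the redundancy described above: two interchangeable payload
# computers and four shared 64K memory modules, one module active at a time.
# Hypothetical illustration only; not NASA flight software.

class PayloadComputer:
    def __init__(self, n_modules=4):
        self.module_healthy = [True] * n_modules
        self.active_module = 0  # only one module is used operationally

    def mark_degraded(self, idx):
        """Record a memory module as degraded (e.g., after a computer halt)."""
        self.module_healthy[idx] = False

    def switch_to_backup(self):
        """Bring the first healthy backup module online, if any responds."""
        for idx, healthy in enumerate(self.module_healthy):
            if healthy and idx != self.active_module:
                self.active_module = idx
                return idx
        raise RuntimeError("no backup memory module came online")

computer = PayloadComputer()
computer.mark_degraded(0)           # initial indications: degrading module
print(computer.switch_to_backup())  # -> 1 (first healthy backup)
```

In the real June 2021 event, as described above, even the command to bring the backup module online failed to complete, which is why the team is still collecting diagnostic information.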

Launched in 1990, Hubble has contributed greatly to our understanding of the universe over the past 30 years.

A building material from banana peels and algae has been created
A building material from banana peels and algae has been created

The development is the work of specialists from the Institute of Industrial Sciences at the University of Tokyo.

Japanese scientists have created a new building material, for the production of which they used food waste, the Daily Mail writes.

For the new building material, the scientists used banana peels, cabbage leaves, algae, and other food waste. The waste was first dried to a powdery state, then mixed with water and subjected to strong heating.

The tests performed show that the resulting material exceeds the target bending strength and does not fall short of concrete in strength.

The scientists also added salt and sugar to the new building material, and these did not affect its strength.

Researchers at the University of Tokyo are squeezing cabbage, fruit peels, and other food scraps into sturdy building material.

The witch in Hansel and Gretel’s tale may have been onto something with her edible house and its attractiveness to children. A research team from the Institute of Industrial Sciences at the University of Tokyo has discovered how to make durable, healthy, and still edible building materials from food.

Pressed cabbage leaves, seaweed, and banana peels may not be as exciting as gingerbread and pastries, but they could be part of a recipe for sustainable construction products.

“Our goal was to use ordinary seaweed and food scraps to build materials that are at least as strong as concrete,” said Yuya Sakai, a specialist in sustainable building materials and lead author of the study. “But since we were using edible food waste, we also wanted to determine whether the recycling process affected the taste of the original materials.”

The team adapted a hot-pressing technique commonly used to compress wood dust into building materials. Instead of wood, they vacuum-dried and then pulverized a variety of food waste, including onion and fruit peels, as well as cabbage.

“The processing technique consists of mixing the food powder with water and seasonings, then pressing the mixture into a mold at high temperature,” the university said. All of the products obtained, except for the pumpkin-skin material, passed the team’s stress tests.

The researchers also found a solution to the pumpkin problem. “We found that Chinese cabbage leaves, which produce a material more than three times stronger than concrete, can be mixed with the weaker pumpkin-based material to provide effective reinforcement,” said Kota Machida, a project associate.

The molded materials remained edible, although the team did not say whether they were difficult to chew. Even leaving the materials exposed to air for four months did not change their taste, and there were no problems with decay.

The development of these materials is still at an early stage, but perhaps one day you will be able to build your own home from them – bringing the witch’s house into the modern age.

The Solar System is our neighbourhood
The Solar System is our neighbourhood

The Solar System is humanity’s space neighborhood, where all human lives have taken place so far. Everyone knows about the Sun and the planets, but few know many more details than that. Learning more about the Solar System means learning more about humanity’s origin, which is one of the reasons why humans have always set their eyes on the sky.

The scientific revolution began with the publication of Nicolaus Copernicus’ De revolutionibus orbium coelestium, in which he explained that the Sun is at the center of the Solar System. But the Sun is not only the center of the Solar System; in a sense, it is the Solar System, as it contains more than 99.75% of the system’s total mass. There are billions of other bodies in the Solar System, such as comets, asteroids, and the 8 planets, but all of these together account for only the remaining 0.25% of the mass.

Since Copernicus’s discovery, it has been well established that all these bodies orbit the Sun, and that the farther a body is from the Sun, the slower it moves. But where does the Solar System end? There is no clear answer; it depends on the definition used.
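
The slowdown with distance follows from Newtonian gravity: for a roughly circular orbit, the speed is v = √(GM/r). A quick Python check using the standard value of the Sun’s gravitational parameter:

```python
import math

GM_SUN = 1.327e20  # solar gravitational parameter, m^3/s^2
AU = 1.496e11      # astronomical unit (Earth-Sun distance), m

def orbital_speed_km_s(r_au):
    """Circular-orbit speed at a distance of r_au astronomical units, in km/s."""
    return math.sqrt(GM_SUN / (r_au * AU)) / 1000

print(f"Earth (1 AU):    {orbital_speed_km_s(1):.1f} km/s")   # ~29.8
print(f"Neptune (30 AU): {orbital_speed_km_s(30):.1f} km/s")  # ~5.4
```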

The end of the Solar System can be considered to lie at the limit of the heliosphere, that is, the region where the radiation of the Sun is stronger than the radiation of the rest of the stars. By this definition, the Solar System’s edge is at about 100 times the distance between the Earth and the Sun, or 15,000 million kilometers. In other words, if you drove a car at 300 km/h, it would take you about 5,700 years to get there. Most impressively, two human-made spacecraft have already crossed that border: Voyager 1 and Voyager 2.
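
The 5,700-year figure is straightforward arithmetic, distance divided by speed:

```python
distance_km = 100 * 1.496e8   # ~100 AU in km (1 AU ~ 149.6 million km)
speed_km_h = 300              # the hypothetical car's speed
hours = distance_km / speed_km_h
years = hours / (24 * 365.25)
print(f"{years:,.0f} years")  # ~5,700 years
```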

The other way to define the Solar System is as the region within which the Sun’s gravity is the dominant force on a body. In this case, the limit is much farther out, at about 50,000 times the distance between the Earth and the Sun. At this distance lies the Oort cloud, a cloud of billions of icy comets surrounding the Sun.

In this vast region, there are billions of bodies moving in ellipses around the Sun, and all of them were formed through the same process. Before the Sun even existed, 4,500 million years ago, there was a nebula, which is like a giant cloud in space. A distant supernova – the explosion of a star – compressed the nebula strongly, causing most of its hydrogen to come together. The mass of hydrogen was so large that its gravity attracted still more hydrogen, and the friction between all the particles raised the temperature and pressure until they were high enough for nuclear fusion to take place. That moment was the birth of the Sun.

The rest of the bodies in the Solar System were created similarly, from the particles that were not attracted by the Sun. The lighter elements were pushed far out by the Sun’s radiation, forming the gas giants: Jupiter, Saturn, Uranus, and Neptune. The heavier elements, on the other hand, stayed closer to the Sun, forming the rocky planets: Mercury, Venus, Earth, and Mars. Finally, asteroids and comets were formed the same way, but they did not manage to attract enough material to grow to the size of a planet. Most asteroids are located in the asteroid belt between Mars and Jupiter, where Jupiter’s gravitational pull prevents them from forming bigger bodies. Comets, meanwhile, usually come from the Kuiper Belt, a region beyond Neptune; the main difference from asteroids is that they are made of ammonia, methane, and water ice instead of rock and metal.

In the beginning, there were trillions of smaller bodies that attracted each other gravitationally and grew in size through a process called accretion, eventually forming only a handful of protoplanets along with all the asteroids and comets. During this time, the Solar System was an extremely dangerous place, with millions of astronomical collisions, as that is how bodies grew in size. Finally, the protoplanets either crashed into each other, forming the planets currently present in the Solar System, or became their moons.

Before the nebula was compressed, it was a three-dimensional, amorphous cloud; the Solar System, however, is now mainly flat, with all bodies orbiting in roughly the same plane. The reason is the conservation of angular momentum, one of the fundamental properties of the universe. It is the same reason ice skaters pull their arms in to spin faster.
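
In symbols, this is the standard textbook relation (not taken from the article itself): for a body of mass m moving on a circle of radius r with angular velocity ω,

```latex
% Conservation of angular momentum for a point mass on a circular path:
L = m r^{2} \omega = \text{constant}
\quad\Longrightarrow\quad
\text{smaller } r \;\Rightarrow\; \text{larger } \omega
```

When the skater pulls in her arms, or a collapsing cloud contracts, r decreases, so ω must increase to keep L constant.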

When the collisions started, bodies were orbiting the Sun in every direction, similar to how we draw an atom (although that is not what an atom really looks like). The collisions, however, ended up canceling out the velocity components perpendicular to the rotation plane, leaving bodies to move only within that plane and giving the Solar System its current shape.

The size of the Solar System is almost incomprehensible on a human scale; yet, when zooming out, it is nothing but a speck of dust in the vastness of our galaxy, the Milky Way, and our Sun becomes one star among billions.

Until not long ago, the Solar System was the only system with known planets, giving it a special value. In 1992, however, the first exoplanet was discovered (an exoplanet is a planet orbiting a star other than the Sun), removing the “special” tag from our Solar System. Thousands more planets have been found since then, with the most recent estimates putting the total number of planets in the Milky Way somewhere between 100 and 400 billion.

This opens up the debate about life somewhere else in the universe. Are we alone in the universe?