Handheld “MasSpec Pen” Reveals Meat and Fish Fraud in Seconds

The MasSpec Pen can authenticate the type and purity of meat samples in as little as 15 seconds. Credit: Adapted from Journal of Agricultural and Food Chemistry 2021, DOI: 10.1021/acs.jafc.0c07830

Meat and fish fraud are global problems, costing consumers billions of dollars every year. On top of that, mislabeled products can cause problems for people with allergies or with religious or cultural dietary restrictions. Current methods for detecting this fraud, while accurate, are slower than inspectors would like. Now, researchers reporting in ACS’ Journal of Agricultural and Food Chemistry have optimized their handheld MasSpec Pen to identify common types of meat and fish within 15 seconds.

News stories of food fraud, such as beef being replaced with horse meat and cheaper fish being branded as premium fillets, have led people to question whether what is on the label is actually in the package. To combat food adulteration, the U.S. Department of Agriculture conducts regular, random inspections of these products.

Although current molecular techniques, such as the polymerase chain reaction (PCR), are highly accurate, these analyses can take hours to days, and are often performed at off-site labs. Previous studies have devised more direct and on-site food analysis methods with mass spectrometry, using the amounts of molecular components to verify meat sources, but they also destroyed samples during the process or required sample preparation steps.

More recently, Livia Eberlin and colleagues developed the MasSpec Pen — a handheld device that gently extracts compounds from a material’s surface within seconds and then analyzes them on a mass spectrometer. So, the team wanted to see whether this device could rapidly and effectively detect meat and fish fraud in pure fillets and ground products.

The researchers used the MasSpec Pen to examine the molecular composition of grain-fed and grass-fed beef, chicken, pork, lamb, venison, and five common fish species collected from grocery stores. Once the device’s tip was pressed against a sample, a 20-μL droplet of solvent was released, extracting sufficient amounts of molecules within three seconds for accurate analysis by mass spectrometry. The whole process took 15 seconds, required no preprocessing, and the liquid extraction did not harm the samples’ surfaces.

Then the team developed authentication models using the unique patterns of the molecules identified, including carnosine, anserine, succinic acid, xanthine and taurine, to distinguish the pure meat types from one another, grain-fed from grass-fed beef, and the five fish species from each other.

Finally, the researchers applied their models to test sets of meats and fish. For these samples, all models identified the protein source with 100% accuracy, matching the accuracy of the current PCR method while being approximately 720 times faster.
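As a quick sanity check on that speedup figure, the implied PCR turnaround follows directly from the numbers above:

\[ 720 \times 15\,\mathrm{s} = 10{,}800\,\mathrm{s} = 3\,\mathrm{h} \]

In other words, the comparison assumes a PCR workflow of roughly three hours, at the fast end of the “hours to days” range mentioned earlier.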

The researchers say they plan to expand the method to other meat products and integrate the MasSpec Pen into a portable mass spectrometer for on-site meat authentication.

Reference: “Rapid Analysis and Authentication of Meat Using the MasSpec Pen Technology” by Abigail N. Gatmaitan, John Q. Lin, Jialing Zhang and Livia S. Eberlin, 10 March 2021, Journal of Agricultural and Food Chemistry.
DOI: 10.1021/acs.jafc.0c07830

The authors acknowledge funding from the Welch Foundation and the Gordon and Betty Moore Foundation.

Preliminary Data Suggests Mixing COVID-19 Vaccines Increases Frequency of Adverse Reactions

  • Research from the Com-COV study, which is comparing mixed dosing schedules of the Pfizer and Oxford-AstraZeneca vaccines, shows an increase in the frequency of mild-to-moderate symptoms in those receiving either mixed dosing schedule
  • Adverse reactions were short-lived, with no other safety concerns
  • Impact of mixed schedules on immunogenicity unknown as yet, with data to follow from this study

Researchers running the University of Oxford-led Com-COV study — launched earlier this year to investigate alternating doses of the Oxford-AstraZeneca vaccine and the Pfizer vaccine — have today reported preliminary data revealing more frequent mild to moderate reactions in mixed schedules compared to standard schedules.

Writing in a peer-reviewed Research Letter published in The Lancet, they report that, when given at a four-week interval, both of the ‘mixed’ schedules (Pfizer-BioNTech followed by Oxford-AstraZeneca, and Oxford-AstraZeneca followed by Pfizer-BioNTech) induced more frequent reactions following the second, ‘boost’ dose than the standard, ‘non-mixed’ schedules. They add that any adverse reactions were short-lived and there were no other safety concerns.

Matthew Snape, Associate Professor in Paediatrics and Vaccinology at the University of Oxford, and Chief Investigator on the trial, said:

“Whilst this is a secondary part of what we are trying to explore through these studies, it is important that we inform people about these data, especially as these mixed-dose schedules are being considered in several countries. The results from this study suggest that mixed-dose schedules could result in an increase in work absences the day after immunization, and this is important to consider when planning immunization of health care workers.

“Importantly, there are no safety concerns or signals, and this does not tell us if the immune response will be affected. We hope to report these data in the coming months. In the meantime, we have adapted the ongoing study to assess whether early and regular use of paracetamol reduces the frequency of these reactions.”

They also noted that as the study data was recorded in participants aged 50 and above, there is a possibility such reactions may be more prevalent in younger age groups.

Reference: 13 May 2021, The Lancet.

About the Com-COV trial:

The study, classified as an Urgent Public Health study by the NIHR, is led by the University of Oxford and run by the National Immunisation Schedule Evaluation Consortium (NISEC) and the Oxford Vaccine Group, backed by £7 million of government funding from the Vaccines Taskforce.

It aims to evaluate the feasibility of using a different vaccine for the initial ‘prime’ vaccination to the follow-up ‘booster’ vaccination, helping policymakers explore whether this could be a viable route to increase the flexibility of vaccination programs.

The trial recruited 830 volunteers aged 50 and above from eight National Institute for Health Research (NIHR) supported sites in England to evaluate the four different combinations of prime and booster vaccination: a first dose of the Oxford-AstraZeneca vaccine followed by boosting with either the Pfizer vaccine or a further dose of the Oxford-AstraZeneca vaccine, or a first dose of the Pfizer vaccine followed by boosting with either the Oxford-AstraZeneca vaccine or a further dose of the Pfizer vaccine.

In April, the researchers expanded the program to include the Moderna and Novavax vaccines in a new study (Com-COV2), run across nine National Institute for Health Research supported sites by NISEC and backed through funding from the Vaccines Taskforce and the Coalition for Epidemic Preparedness Innovations. Volunteers who had already received either the Oxford-AstraZeneca or Pfizer vaccine were randomly allocated to receive either the same vaccine for their second dose or a dose of the COVID-19 vaccine produced by Moderna or Novavax.

The six new ‘arms’ of the trial each aimed to recruit 175 candidates, adding a further 1050 recruits into this program.

Both studies are designed as so-called ‘non-inferiority’ studies — the intent is to demonstrate that mixing is not substantially worse than not mixing — and will compare the immune system responses to the gold-standard responses reported in previous clinical trials of each vaccine.
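To make “non-inferiority” concrete: a typical formulation (illustrative notation only; the trial’s pre-specified margin and endpoints are defined in the study protocol, not reproduced here) compares the ratio of geometric mean antibody titres (GMT) between schedules against a margin \(\delta\):

\[ H_0:\ \frac{\mathrm{GMT}_{\text{mixed}}}{\mathrm{GMT}_{\text{standard}}} \le \delta \qquad \text{vs.} \qquad H_1:\ \frac{\mathrm{GMT}_{\text{mixed}}}{\mathrm{GMT}_{\text{standard}}} > \delta \]

Non-inferiority is declared if the lower bound of the confidence interval for the ratio lies above \(\delta\), i.e., the mixed schedule is demonstrably not substantially worse than the standard one.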

About the Oxford Vaccine Group

The Oxford Vaccine Group (OVG) conducts studies of new and improved vaccines for children and adults and is based in the Department of Paediatrics at the University of Oxford. The multidisciplinary group includes consultants in vaccinology, a Director of Clinical Trials, a Senior Clinical Trials Manager, adult and pediatric clinical research fellows, adult and pediatric research nurses, project managers, statisticians, a QA manager, a Clinical Trials IT and Development Lead, and an administration team. The team also includes post-doctoral scientists, research assistants, and DPhil students, and works with professionals from a range of specialties, including immunologists, microbiologists, epidemiologists, health communicators, a sociologist, a community pediatrician, the local Health Protection team, and a bioethicist.

OVG is a UKCRC registered clinical trials unit working in collaboration with the Primary Care Trials Unit at the University (registration number: 52).

About the National Institute for Health Research

The National Institute for Health Research (NIHR) is the nation’s largest funder of health and care research. The NIHR was established in 2006 to improve the health and wealth of the nation through research, and is funded by the Department of Health and Social Care. In addition to its national role, the NIHR commissions applied health research to benefit the poorest people in low- and middle-income countries, using Official Development Assistance funding.

About the Vaccines Taskforce

The Vaccines Taskforce (VTF) is a joint unit in the Department for Business, Energy and Industrial Strategy (BEIS) and Department for Health and Social Care (DHSC). The VTF was set up to ensure that the UK population has access to clinically effective and safe vaccines as soon as possible, while working with partners to support international access to successful vaccines.

The Vaccines Taskforce comprises a dedicated team of private sector industry professionals and officials from across government who are working at speed to build a portfolio of promising vaccine candidates that can end the global pandemic.

The UK has secured early access to 517 million doses of eight of the most promising vaccine candidates. This includes agreements with:

  • BioNTech/Pfizer for 100 million doses
  • Valneva for 100 million doses
  • Oxford/AstraZeneca who will work to supply 100 million doses of the vaccine being developed by Oxford University
  • GlaxoSmithKline and Sanofi Pasteur to buy 60 million doses
  • Novavax for 60 million doses
  • Janssen for 30 million doses of their not-for-profit vaccine, alongside funding of their Phase 3 clinical trial
  • Moderna for 17 million doses
  • CureVac for 50 million doses

The Vaccines Taskforce’s approach to securing access to vaccines is through:

  • procuring the rights to a diverse range of promising vaccine candidates to spread risk and optimize chances for success
  • providing funding for clinical studies, diagnostic monitoring and regulatory support to rapidly evaluate vaccines for safety and efficacy
  • providing funding and support for manufacturing scale-up and fill and finish at risk so that the UK has vaccines produced at scale and ready for administration should any of these prove successful

About the University of Oxford

Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the fifth year running, and at the heart of this success is our ground-breaking research and innovation.

Oxford is world-famous for research excellence and home to some of the most talented people from across the globe. Our work helps the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research sparks imaginative and inventive insights and solutions.

Through its research commercialization arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 200 new companies since 1988. Over a third of these companies have been created in the past three years.

Study Shows New Obesity Treatment Semaglutide Reduces Body Weight Regardless of Patient Characteristics

Females and those with lower body weight have better results.

New research presented at this year’s European Congress on Obesity (held online, 10-13 May) shows that treatment with the drug semaglutide reduces body weight in adults with overweight or obesity, regardless of their baseline characteristics.

However, the study showed that female participants had slightly better results than males and also that participants with the lowest starting body weight responded slightly better than those with higher body weights. The study is by Professor Robert Kushner, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA, and colleagues.

Semaglutide is already approved for the treatment of type 2 diabetes in multiple countries and is under development as a treatment for obesity. The STEP trials published over the past year have established the efficacy and safety of semaglutide 2.4 mg in treating people with overweight and obesity. In this new analysis of data from the STEP 1 trial, the researchers investigated weight loss in subgroups of participants based on their baseline characteristics.

In STEP 1, adults without type 2 diabetes with either a body mass index (BMI) of at least 27 kg/m² plus one or more weight-related comorbidities, or a BMI of 30 kg/m² or above, were enrolled. Participants were randomized to a once-weekly injection of semaglutide 2.4 mg or placebo, both plus lifestyle intervention, for 68 weeks.

The authors looked at what proportions of the participants achieved different levels of weight loss with semaglutide from baseline to week 68 (≥20%, 15-<20%, 10-<15%, or 5-<10%) when grouped by different baseline characteristics (age, sex, race [White, Asian, Black or African American, or other], body weight, BMI, waist circumference and glycaemic status [normal blood sugar, or pre-diabetes]). Mean percent weight loss with semaglutide from baseline to week 68 was analyzed separately by sex (male, female) and baseline body weight (≥115 kg, 100-<115 kg, 90-<100 kg, <90 kg) subgroup.

The original study included 1,961 randomized participants (mean age 46 years, body weight 105.3 kg, BMI 37.9 kg/m²; 74.1% female). For categorical weight loss, the observed proportions of participants with ≥20%, 15-<20%, 10-<15% and 5-<10% weight loss at week 68 were 34.8%, 19.9%, 20.0% and 17.6% with semaglutide vs 2.0%, 3.0%, 6.8% and 21.2% with placebo, respectively.
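Laid out side by side, the separation between trial arms is easier to see; a minimal sketch, with values transcribed from the paragraph above:

```python
# Observed proportions of STEP 1 participants in each weight-loss band at
# week 68, as reported above (percent of participants in each arm).
bands = ["≥20%", "15-<20%", "10-<15%", "5-<10%"]
semaglutide = [34.8, 19.9, 20.0, 17.6]
placebo = [2.0, 3.0, 6.8, 21.2]

print(f"{'band':>8} {'semaglutide':>12} {'placebo':>8}")
for band, s, p in zip(bands, semaglutide, placebo):
    print(f"{band:>8} {s:>11.1f}% {p:>7.1f}%")
```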

The distribution of participants across weight-loss groups did not appear to be affected by any baseline characteristics, except sex and baseline body weight. Mean percent weight loss at week 68 with semaglutide was greater among females (-18.4%) than males (-12.9%), and in participants with lower vs higher baseline body weight (-18.6% for participants with <90 kg body weight at baseline; -13.9% for participants with ≥115 kg baseline body weight).

The authors conclude: “We found that weight loss with once-weekly injections of semaglutide 2.4 mg was seen in all subgroups evaluated and was generally not influenced by baseline characteristics. The exceptions were sex and baseline body weight; female sex and a low baseline body weight were associated with a slightly greater response to semaglutide. These data support the use of semaglutide 2.4 mg across a broad population of patients with overweight or obesity.”

No Lasting Benefit to Surgically Placed Tubes Over Antibiotics for Childhood Ear Infections

There is no long-term benefit to surgically placing tympanostomy tubes in a young child’s ears to reduce the rate of recurrent ear infections during the ensuing two years compared with giving oral antibiotics to treat ear infections, a randomized trial led by UPMC Children’s Hospital of Pittsburgh and University of Pittsburgh pediatrician-scientists determined.

The trial results, published today (May 12, 2021) in the New England Journal of Medicine, are among the first since the pneumococcal vaccine was added to pediatric vaccination schedules, providing updated evidence that may help shape pediatric guidelines on treating recurrent ear infections. Importantly, despite their greater use of antibiotics, the trial found no evidence of increased bacterial resistance among children in the medical-management group.

“Subjecting a young child to the risks of anesthesia and surgery, the possible development of structural changes of the tympanic membrane, blockage of the tube or persistent drainage through the tube for recurrent ear infections, which ordinarily occur less frequently as the child ages, is not something I would recommend in most instances,” said lead author Alejandro Hoberman, M.D., director of the Division of General Academic Pediatrics at UPMC Children’s Hospital and the Jack L. Paradise Endowed Professor of Pediatric Research at Pitt’s School of Medicine.

Alejandro Hoberman, Director of the Division of General Academic Pediatrics at UPMC Children’s Hospital and the Jack L. Paradise Endowed Professor of Pediatric Research at the University of Pittsburgh School of Medicine. Credit: UPMC

“We used to often recommend tubes to reduce the rate of ear infections, but in our study, episodic antibiotic treatment worked just as well for most children,” he said. “Another theoretical reason to resort to tubes is to use topical ear drops rather than systemic oral antibiotics in subsequent infections in the hope of preventing the development of bacterial resistance, but in this trial, we did not find increased resistance with oral antibiotic use. So, for most children with recurrent ear infections, why undergo the risks, cost and nuisance of surgery?”

Next to the common cold, ear infections are the most frequently diagnosed illness in U.S. children. Ear infections can be painful, force lost time at work and school, and may cause hearing loss. Tympanostomy tube placement, which is a surgical procedure to insert tiny tubes into a child’s eardrums to prevent the accumulation of fluid, is the most common operation performed on children after the newborn period.

Hoberman and his team enrolled 250 children ages 6 to 35 months at UPMC Children’s Hospital, Children’s National Medical Center in Washington, D.C., and Kentucky Pediatric and Adult Research in Bardstown, Ky. All of the children had had medically verified recurrent ear infections and had received the pneumococcal conjugate vaccine. They were randomly assigned to receive “medical management,” which involved oral antibiotics at the time of ear infections, or the surgical insertion of tubes and antibiotic ear drops. The children were followed for two years.

Overall, there were no differences between children in the two groups when it came to the rate or severity of ear infections. And, though the children in the medical management group received more antibiotics, there also was no evidence of increased antimicrobial resistance in samples taken from the children. The trial also didn’t find any difference between the two groups in the children’s quality of life or in the effect of the children’s illness on parents’ quality of life.

One short-term benefit of placing tympanostomy tubes was that, on average, it took about two months longer for a child to develop a first ear infection after tubes were placed, compared with children whose ear infections were managed with antibiotics.

Another finding of the trial was that the rate of ear infections among children in both groups fell with increasing age. The rate of infections was 2.6 times higher in children younger than 1 year, compared with the oldest children in the trial, those between 2 and 3 years, regardless of whether they received medical management or tube insertion.

“Most children outgrow ear infections as the Eustachian tube, which connects the middle ear with the back of the throat, works better,” Hoberman said. “Previous studies of tubes were conducted before children were universally immunized with pneumococcal conjugate vaccine, which also has reduced the likelihood of recurrent ear infections. It’s important to recognize that most children outgrow ear infections as they grow older. However, we must appreciate that for the relatively few children who continue to meet criteria for recurrent ear infections — three in six months or four in one year — after having met those criteria initially, placement of tympanostomy tubes may well be beneficial.”

Reference: 12 May 2021, New England Journal of Medicine.
DOI: 10.1056/NEJMoa2027278

Additional study authors are Diego Preciado, M.D., Ph.D., and Daniel E. Felton, M.D., both of Children’s National Medical Center; Jack L. Paradise, M.D., David H. Chi, M.D., MaryAnn Haralam, M.S.N., C.R.N.P., Diana H. Kearney, R.N., C.C.R.C., Sonika Bhatnagar, M.D., M.P.H., Gysella B. Muñiz Pujalt, M.D., Timothy R. Shope, M.D., M.P.H., Judith M. Martin, M.D., Marcia Kurs-Lasky, M.S., Hui Liu, M.S., Kristin Yahner, M.S., Jong-Hyeon Jeong, Ph.D., Jennifer P. Nagg, R.N., Joseph E. Dohar, M.D., and Nader Shaikh, M.D., M.P.H., all of Pitt; Norman L. Cohen, M.D., and Brian Czervionke, M.D., both of UPMC Children’s Community Pediatrics; and Stan L. Block, M.D., of Kentucky Pediatric and Adult Research.

This research was funded by the National Institute on Deafness and Other Communication Disorders; the trial is registered at ClinicalTrials.gov under number NCT02567825.

New Research Shows COVID-19 Alters Gray Matter Volume in the Brain

Covid-19 patients who receive oxygen therapy or experience fever show reduced gray matter volume in the frontal-temporal network of the brain, according to a new study led by researchers at Georgia State University and the Georgia Institute of Technology.

The study found lower gray matter volume in this brain region was associated with a higher level of disability among Covid-19 patients, even six months after hospital discharge.

Gray matter is vital for processing information in the brain and gray matter abnormality may affect how well neurons function and communicate. The study, published in the May 2021 issue of Neurobiology of Stress, indicates gray matter in the frontal network could represent a core region for brain involvement in Covid-19, even beyond damage related to clinical manifestations of the disease, such as stroke.

The researchers, who are affiliated with the Center for Translational Research in Neuroimaging and Data Science (TReNDS), analyzed computed tomography scans from 120 neurological patients, including 58 with acute Covid-19 and 62 without Covid-19, matched for age, gender and disease. The work was done jointly with Enrico Premi and his colleagues at the University of Brescia in Italy, who provided the data for the study. The analysis used source-based morphometry, which boosts statistical power for studies with a moderate sample size.
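Source-based morphometry is, at its core, independent component analysis applied to a subjects-by-voxels matrix of gray matter maps. The sketch below illustrates the idea on toy data; the array sizes and the scikit-learn pipeline are illustrative assumptions, not the study’s actual code:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for segmented gray matter maps: one row per subject,
# one column per voxel (real maps come from the imaging pipeline).
rng = np.random.default_rng(0)
gray_matter = rng.random((120, 5000))

# Decompose the maps into spatially independent "sources"; each subject
# then has one loading per source, and loadings can be compared across
# clinical groups (e.g., oxygen therapy vs. none) with standard statistics.
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
loadings = ica.fit_transform(gray_matter)  # shape: (subjects, components)
sources = ica.components_                  # shape: (components, voxels)
print(loadings.shape, sources.shape)
```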

Researchers Kuaikuai Duan and Vince Calhoun have found that neurological complications of Covid-19 patients may be linked to lower gray matter volume in the frontal region of the brain even six months after hospital discharge. Credit: Vince Calhoun, Georgia Tech

“Science has shown that the brain’s structure affects its function, and abnormal brain imaging has emerged as a major feature of Covid-19,” said Kuaikuai Duan, the study’s first author, a graduate research assistant at TReNDS and Ph.D. student in Georgia Tech’s School of Electrical and Computer Engineering. “Previous studies have examined how the brain is affected by Covid-19 using a univariate approach, but ours is the first to use a multivariate, data-driven approach to link these changes to specific Covid-19 characteristics (for example fever and lack of oxygen) and outcome (disability level).”

The analysis showed patients with higher levels of disability had lower gray matter volume in the superior, medial and middle frontal gyri at discharge and six months later, even when controlling for cerebrovascular diseases. Gray matter volume in this region was also significantly reduced in patients receiving oxygen therapy compared to patients not receiving oxygen therapy. Patients with fever had a significant reduction in gray matter volume in the inferior and middle temporal gyri and the fusiform gyrus compared to patients without fever. The results suggest Covid-19 may affect the frontal-temporal network through fever or lack of oxygen.

Reduced gray matter in the superior, medial, and middle frontal gyri was also present in patients with agitation compared to patients without agitation. This implies that gray matter changes in the frontal region of the brain may underlie the mood disturbances commonly exhibited by Covid-19 patients.

“Neurological complications are increasingly documented for patients with Covid-19,” said Vince Calhoun, senior author of the study and director of TReNDS. Calhoun is Distinguished University Professor of Psychology at Georgia State and holds appointments in the School of Electrical and Computer Engineering at Georgia Tech and in neurology and psychiatry at Emory University. “A reduction of gray matter has also been shown to be present in other mood disorders such as schizophrenia and is likely related to the way that gray matter influences neuron function.”

The study’s findings demonstrate changes to the frontal-temporal network could be used as a biomarker to determine the likely prognosis of Covid-19 or evaluate treatment options for the disease. Next, the researchers hope to replicate the study on a larger sample size that includes many types of brain scans and different populations of Covid-19 patients.

Reference: “Alterations of frontal-temporal gray matter volume associate with clinical measures of older adults with COVID-19” by Kuaikuai Duan, Enrico Premi, Andrea Pilotto, Viviana Cristillo, Alberto Benussi, Ilenia Libri, Marcello Giunta, H. Jeremy Bockholt, Jingyu Liu, Riccardo Campora, Alessandro Pezzini, Roberto Gasparotti, Mauro Magoni, Alessandro Padovani and Vince D. Calhoun, 13 April 2021, Neurobiology of Stress.
DOI: 10.1016/j.ynstr.2021.100326

TReNDS is a partnership among Georgia State, Georgia Tech and Emory University and is focused on improving our understanding of the human brain using advanced analytic approaches. The center uses large-scale data sharing and multi-modal data fusion techniques, including deep learning, genomics, brain mapping and artificial intelligence.

Genetic Risk of Heart Disease May Be Due to Low Omega 3-Linked Biomarker Found in Fish Oils

People who are genetically more likely to suffer from cardiovascular diseases may benefit from boosting a biomarker found in fish oils, a new study suggests.

In a genetic study of 1,886 Asian Indians published in PLOS ONE today (Wednesday, May 12, 2021), scientists have identified the first evidence for the role of adiponectin, an obesity-related biomarker, in the association between a genetic variant of omentin and cardiometabolic health.

The team, led by Professor Vimal Karani from the University of Reading, observed that the role of adiponectin was linked to cardiovascular disease markers that were independent of common and central obesity among the Asian Indian population.

Prof Vimal Karani, Professor of Nutrigenetics and Nutrigenomics at the University of Reading, said:

“This is an important insight into one way that people who are not obese may develop heart disease, through low concentrations of a biomarker in the body called adiponectin. It may also demonstrate why certain lifestyle factors such as consumption of oily fish and regular exercise are so important for warding off the risk of heart disease.

“We studied Asian Indian populations who have a particular genetic risk of developing heart disease and did see that the majority of our participants were already cardiometabolically unhealthy. However, the omentin genetic variation that we studied is prevalent across diverse ethnic groups and warrants further work to see whether omentin is playing a role in heart disease risk in other groups too.”

The Asian Indian population who took part in the study were found to have a significant association between low levels of adiponectin and cardiovascular disease, even after adjusting for factors normally linked with heart disease.

Participants in the study were screened and assessed based on a range of cardiovascular measures, including BMI, fasting blood sugar, and cholesterol, with more than 80% of those who took part assessed as cardiometabolically unhealthy.

Further analysis showed that those with genetic variation in omentin production also had less of the biomarker adiponectin in their body.

Professor Vimal Karani said:

“What we can see clearly from the observations is that there is a three-stage process going on where the omentin gene difference is contributing to the low biomarker adiponectin, which in turn seems to be linked to worse outcomes and risk of heart disease.

“The omentin gene itself works to produce a protein in the body that has been shown to have anti-inflammatory and cardioprotective effects, and variations in the omentin gene have previously been linked to cardiometabolic diseases. The findings suggest that people can develop cardiometabolic diseases due to this specific omentin genetic risk if they have low levels of the biomarker adiponectin.”

Reference: 12 May 2021, PLOS ONE.
DOI: 10.1371/journal.pone.0238555

Funding: The Chennai Willingdon Corporate Foundation supported the CURES field studies.

Tracking Carbon From the Ocean Surface to the Dark “Twilight Zone”

Different phytoplankton communities bloom around the Canadian Maritime Provinces and across the northwestern Atlantic Ocean. Credit: NASA/Aqua/MODIS composite collected on March 22, 2021

A seaward journey, supported by both NASA and the National Science Foundation, set sail in the northern Atlantic in early May—the sequel to a complementary expedition, co-funded by NSF, that took place in the northern Pacific in 2018.

The 2021 deployment of NASA’s oceanographic field campaign, called Export Processes in the Ocean from Remote Sensing (EXPORTS), consists of 150 scientists and crew from more than 30 governmental, university, and private non-governmental institutions. The team is spread across three oceanographic research vessels, which will meet in international waters west of Ireland over the underwater Porcupine Abyssal Plain. Throughout the field campaign, scientists will be deploying a variety of instruments from aboard the three ships: the RRS James Cook and the RRS Discovery, operated by the National Oceanography Centre in Southampton, UK, plus a third vessel chartered by the Ocean Twilight Zone project of the Woods Hole Oceanographic Institution and operated by the Marine Technology Unit in Vigo, Spain. A total of 52 high-tech platforms, including several autonomous vehicles, will be taking measurements and continuously collecting data.

Diverse plankton from surface waters seen under a microscope. The sample is so concentrated that individual organisms can be identified without zooming in. Credit: Laura Holland/University of Rhode Island

Much of the science focuses on the ocean’s role in the global carbon cycle. Through chemical and biological processes, the ocean removes as much carbon from the atmosphere as all plant life on land. Scientists hope to further explore the mechanisms of the ocean’s biological pump—the process by which carbon from the atmosphere and surface ocean is sequestered long-term in the deep ocean. This process involves microscopic plant-like organisms called phytoplankton, which undergo photosynthesis just like plants on land and can be seen from space by observing changes to the color of the ocean. Their productivity has a significant impact on Earth’s carbon cycle, which then in turn affects Earth’s climate.

“This is the first comprehensive study of the ocean’s biological carbon pump since the Joint Global Ocean Flux study in the 1980s and nineties,” said EXPORTS science lead David Siegel from the University of California, Santa Barbara. “In the interim, we have gotten advanced microscopic imaging tools, genomics, robust chemical and optical sensors and autonomous robots—a bunch of stuff that we didn’t have back then, so we can ask much harder and much more important questions.” Those questions include how much organic carbon is leaving the surface ocean, and what path does it take as it makes its way to the deep where it can be sequestered for long periods of time, from decades to thousands of years.

Science and crew aboard the RRS James Cook deploy a sampling rosette, a platform that allows for the collection of water samples and other information from ocean depths, while the RRS Discovery and R/V Sarmiento de Gamboa deploy the same instrumentation simultaneously in the distance. Credit: Deborah Steinberg

Scientists know of three major pathways that transport carbon from the atmosphere and upper ocean to the dark “twilight zone” that lies 1,640 feet (500 m) or more below the surface: 1) physical ocean mixing and circulation can carry suspended organic matter deep down into the ocean’s interior, 2) particles can sink due to gravity, often after passing through the guts of organisms, and 3) daily vertical migrations of animals that commute between upper and lower ocean levels bring carbon along for the ride.

EXPORTS aims to determine how much carbon is transported by each of these pathways by observing the carbon pump in two very different ocean ecosystems with varying conditions. The researchers chose the northern Pacific and northern Atlantic because they are on the opposite ends of the productivity spectrum (i.e. rates of photosynthesis) and experience two opposing extremes of physical processes such as eddies and currents. Studying contrasting environments will provide the maximum insight for modeling future climate scenarios.

The science crew boarded the R/V Sarmiento de Gamboa on April 29 after 14 days in quarantine. Credit: Ken Buesseler/Woods Hole Oceanographic Institution

According to Ivona Cetinić, project scientist and oceanographer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, the North Pacific is akin to a desert or “simple meadow” on land. It is low in nutrients, in this case the iron needed for photosynthesis, and has among the fewest eddying currents found in the global oceans. Therefore, carbon transport into the deep ocean is primarily driven by tiny animals, called zooplankton, consuming microscopic plant-like phytoplankton and then excreting the digested carbon to the depths below.

Phytoplankton drift in the upper, sunlit layer of the ocean where they can convert carbon dioxide that comes from the atmosphere into organic carbon. When conditions are right, as is often the case in the North Atlantic region this time of year, phytoplankton populations grow or “bloom” so rapidly they can be seen from space.

The North Atlantic also features strong currents that contrast with the North Pacific’s slower-moving waters. Along with those strong currents, Siegel says the team anticipates at least four days of harsh weather during the month-long expedition.

But EXPORTS data doesn’t just apply to the sea—it will also be used to improve satellite technology. Cetinić works with several optical measurements that come from ocean color satellites, which measure light reflected from the ocean surface in parts of the visible spectrum, what we know as the colors of the rainbow. These provide insights such as measurements of the ocean’s temperature, salinity, carbon, and concentrations of a green pigment called chlorophyll. However, the varying species of phytoplankton occupying different parts of the ecosystem and carbon cycle produce different amounts and shades of green chlorophyll, creating nuance in ocean color that current ocean color satellites can’t “see.”

Among the instrumentation deployed during EXPORTS are highly refined, and in some cases experimental, optical instruments to measure ocean color that are akin to instruments that will be aboard future NASA satellites. Researchers will combine these satellite-simulating measurements with detailed observations of the surface phytoplankton community — through genomics, image analysis or pigment composition — as well as knowledge of their physiology, to enable satellites to detect oceanic diversity and ultimately its role in the oceanic carbon cycle.

The next generation of these satellites, NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, will be hyperspectral, meaning it will be able to collect data across the entire visible spectrum, and capture information beyond the visible part, including ultraviolet and shortwave infrared.

“What we see while we are on the ground gives us an understanding of what kind of information we will need to see from space in order to capture those critical processes we want to be able to better understand,” Cetinić said. “That drives the development of the space-based technology. In return, data coming from the new Earth observing satellites allow for scientists, such as the ones participating in EXPORTS, to go and find other crucial information or develop new techniques to complement the current, or even inspire a new, Earth observing satellite. This perpetual interplay of technology and science, ultimately benefits the whole of humanity.”

Following the fieldwork campaign, an additional phase of EXPORTS will focus on using the data collected from the Atlantic and Pacific to predict what the carbon transport pathways may look like in future oceans.

“What we currently know is limited to what is happening in oceans today,” said Siegel. “With the ongoing climate-driven changes, seen not only in the ocean but across the Earth systems, we need to be able to predict what’s going to be happening in 2075, and we do not yet have that predictive understanding.”

Because so many characteristics of a single slice of ocean are going to be measured at the same time, existing computer models will have a rich and more complete data set depicting the carbon pump on which to base projections of what might happen in the near future deeper in the ocean—and what the impacts might be on the carbon cycle.

“It’s such a good data set that it is going to be fueling research for decades to come,” said Cetinić.

Both PACE and EXPORTS experienced delays because of the COVID-19 pandemic. Now, to ensure the safety and security of every individual involved, a two-week quarantine was required before sailing and social distancing protocols were enacted for the first week aboard the ships. Siegel says the diversity and dedication of the team members, the unparalleled support from the U.K.’s National Oceanography Centre to ensure the ships and crew are ready and safe for sailing, the sustained commitment from NASA Headquarters, and a great deal of good fortune are the reasons that the campaign is still able to go ahead this year.

An Uncrackable Combination: Invisible Ink and Artificial Intelligence

Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.

Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial, or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat, or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light — a modern take on invisible ink.

In addition, advances in artificial intelligence (AI) models — made by networks of processing algorithms that learn how to handle complex information — can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu, and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.

With regular ink, a computer trained with the codebook decodes “STOP” (top); when a UV light is shone on the paper, the invisible ink is exposed, and the real message is revealed as “BEGIN” (bottom). Credit: Adapted from ACS Applied Materials & Interfaces 2021, DOI: 10.1021/acsami.1c01179

The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then, they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model’s ability to decode messages printed using a combination of both regular red ink and the UV fluorescent ink.

With 100% accuracy, the AI model read the regular ink symbols as “STOP,” but when a UV light was shone on the writing, the invisible ink revealed the desired message: “BEGIN.” Because these algorithms can notice minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
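A minimal sketch of the two-stage decoding idea (a trained recognizer plus a secret codebook) follows. The symbol names and codebook entries are invented for illustration, and the recognizer is stubbed with a lookup; in the paper that role is played by a neural network operating on scanned images:

```python
# Sketch: one printed page carries two symbol streams. Under ambient light a
# camera sees only the regular red-ink symbols; under UV light the
# carbon-nanoparticle ink fluoresces and its symbols become readable too.
# All symbol names and codebook entries here are hypothetical.

def classify_symbol(symbol_image: str) -> int:
    """Stage 1: recognizer mapping a symbol image to a symbol ID (stubbed)."""
    stub_model = {"sym_a": 0, "sym_b": 1, "sym_c": 2, "sym_d": 3,
                  "sym_e": 4, "sym_f": 5, "sym_g": 6, "sym_h": 7}
    return stub_model[symbol_image]

# Stage 2: the secret codebook. Without it, detecting the ink alone does
# not reveal the message.
CODEBOOK = {0: "S", 1: "T", 2: "O", 3: "P",
            4: "B", 5: "E", 6: "G", 7: "IN"}

def decode(symbols) -> str:
    return "".join(CODEBOOK[classify_symbol(s)] for s in symbols)

ambient_scan = ["sym_a", "sym_b", "sym_c", "sym_d"]  # red ink only
uv_scan = ["sym_e", "sym_f", "sym_g", "sym_h"]       # fluorescent ink revealed

print(decode(ambient_scan))  # -> STOP  (decoy message)
print(decode(uv_scan))       # -> BEGIN (hidden message)
```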

Reference: “Paper Information Recording and Security Protection Using Invisible Ink and Artificial Intelligence” by Yunhuan Yuan, Jian Shao, Mao Zhong, Haoran Wang, Chen Zhang, Jun Wei, Kang Li, Jie Xu and Weiwei Zhao, 20 April 2021, ACS Applied Materials & Interfaces.
DOI: 10.1021/acsami.1c01179

The authors acknowledge funding from the Shenzhen Peacock Team Plan and the Bureau of Industry and Information Technology of Shenzhen through the Graphene Manufacturing Innovation Center (201901161514).

RoboWig: A Robot That Can Help You Untangle Your Hair

A robotic arm setup is equipped with a sensorized soft brush and aided by a camera to study the complex nature of manipulating and brushing hair fibers. Credit: Photo courtesy of MIT CSAIL

Robotic arm equipped with a hairbrush helps with brushing tasks and could be an asset in assistive-care settings.

With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks, oftentimes for many patients and with little time to spare. Personal care robots that brush hair could provide substantial help and relief. 

This may seem like a truly radical form of “self-care,” but crafty robots for things like shaving, hair-washing, and makeup are not new. In 2011, the tech giant Panasonic developed a robot that could wash, massage, and even blow-dry hair, explicitly designed to help support “safe and comfortable living of the elderly and people with limited mobility, while reducing the burden of caregivers.” 

Hair-combing bots, however, have been less explored, leading scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Soft Math Lab at Harvard University to develop a robotic arm setup with a sensorized soft brush. The robot is equipped with a camera that helps it “see” and assess curliness, so it can plan a delicate and time-efficient brush-out.

The team’s control strategy is adaptive to the degree of tangling in the fiber bunch, and they put “RoboWig” to the test by brushing wigs ranging from straight to very curly hair.

While the hardware setup of RoboWig looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team opted to represent the entangled hair as sets of entwined double helices — think classic DNA strands. This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems.

“By developing a model of tangled fibers, we understand from a model-based perspective how hairs must be detangled: starting from the bottom and slowly working the way up to prevent ‘jamming’ of the fibers,” says Hughes, the lead author on a paper about RoboWig. “This is something everyone who has brushed hair has learned from experience, but is now something we can demonstrate through a model, and use to inform a robot.”

The task at hand is a tangled one. Every head of hair is different, and the intricate interplay between hairs when combing can easily lead to knots. What’s more, if an incorrect brushing strategy is used, the process can be very painful and damaging to the hair.

Previous research in the brushing domain has mostly been on the mechanical, dynamic, and visual properties of hair, as opposed to RoboWig’s refined focus on tangling and combing behavior.

To brush and manipulate the hair, the researchers added a soft-bristled sensorized brush to the robot arm, allowing the forces during brushing to be measured. They combined this setup with something called a “closed-loop control system,” which takes feedback from an output and automatically performs an action without human intervention. This created “force feedback” from the brush — a control method that lets the user feel what the device is doing — so the length of the stroke could be optimized to take into account both the potential “pain” and the time taken to brush.

Initial tests preserved the human head — for now — and instead were done on a number of wigs of various hair styles and types. The model provided insight into combing behavior, relating the number of entanglements to how they could be efficiently and effectively brushed out by choosing appropriate brushing lengths. For example, for curlier hair, the pain cost would dominate, so shorter brush lengths were optimal.
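The trade-off the researchers describe can be captured in a toy cost model; the sketch below is illustrative (the constants and the quadratic “pain” term are assumptions, not the paper’s model), but it reproduces the qualitative finding that curlier hair favors shorter strokes:

```python
import numpy as np

HAIR_LENGTH = 30.0  # cm of hair to brush out (illustrative)

def total_cost(stroke_len, curliness, pain_weight=1.0, time_weight=1.0):
    """Toy cost: longer strokes mean fewer strokes (less time), but each
    stroke drags through more tangles, with pain growing faster than
    linearly in stroke length for curlier hair."""
    n_strokes = HAIR_LENGTH / stroke_len
    pain_per_stroke = curliness * stroke_len**2
    return time_weight * n_strokes + pain_weight * n_strokes * pain_per_stroke

lengths = np.linspace(0.5, 10.0, 200)
for curliness in (0.05, 0.5):  # straighter vs. curlier hair
    best = lengths[np.argmin([total_cost(l, curliness) for l in lengths])]
    print(f"curliness {curliness}: optimal stroke length ≈ {best:.1f} cm")
```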

The team wants to eventually perform more realistic experiments on humans, to better understand the performance of the robot with respect to their experience of pain — a metric that is obviously highly subjective, as one person’s “two” could be another’s “eight.”

“To allow robots to extend their task-solving abilities to more complex tasks such as hair brushing, we need not only novel safe hardware, but also an understanding of the complex behavior of the soft hair and tangled fibers,” says Hughes. “In addition to hair brushing, the insights provided by our approach could be applied to brushing of fibers for textiles, or animal fibers.”

Hughes wrote the paper alongside Harvard University School of Engineering and Applied Sciences PhD students Thomas Bolton Plumb-Reyes and Nicholas Charles; Professor L. Mahadevan of Harvard’s School of Engineering and Applied Sciences, Department of Physics, and Organismic and Evolutionary Biology; and MIT professor and CSAIL Director Daniela Rus. They presented the paper virtually at the IEEE Conference on Soft Robotics (RoboSoft) earlier this month.

The project was supported, in part, by the National Science Foundation’s Emerging Frontiers in Research and Innovation program between MIT CSAIL and the Soft Math Lab at Harvard.

Forecasting a Volcano’s Eruption Style Using Early Indicators of Magma Viscosity

Lava fountaining from the most productive eruptive fissure, called fissure 8 at the time and now named Ahu’aila’au, built a cinder cone 55 meters high, about the height of a 10-story building. Most of the 2018 lower East Rift Zone eruption’s 0.8 cubic kilometers of lava erupted from this point. Credit: B. Shiro, USGS

The 2018 eruption of Kīlauea Volcano in Hawai’i provided scientists with an unprecedented opportunity to identify new factors that could help forecast the hazard potential of future eruptions.

The properties of the magma inside a volcano affect how an eruption will play out. In particular, the viscosity of this molten rock is a major factor in influencing how hazardous an eruption could be for nearby communities.

Very viscous magmas are linked with more powerful explosions because they can block gas from escaping through vents, allowing pressure to build up inside the volcano’s plumbing system. On the other hand, extrusion of more viscous magma results in slower-moving lava flows.

“But magma viscosity is usually only quantified well after an eruption, not in advance,” explained Carnegie’s Diana Roman. “So, we are always trying to identify early indications of magma viscosity that could help forecast a volcano’s eruption style.”

In May 2018, eruptive fissures opened and deposited lava within the Leilani Estates subdivision on the Island of Hawaii. Over 700 homes were destroyed, displacing more than 2,000 people. Credit: B. Shiro, USGS

She led new work identifying an indicator of magma viscosity that can be measured before an eruption. This could help scientists and emergency managers understand possible patterns of future eruptions. The findings are published in Nature.

The 2018 event included the first eruptive activity in Kīlauea’s lower East Rift Zone since 1960. The first of 24 fissures opened in early May, and the eruption continued for exactly three months. This situation provided unprecedented access to information for many researchers, including Roman and her colleagues: Arianna Soldati and Don Dingwell of Ludwig-Maximilians-University of Munich, Bruce Houghton of the University of Hawai’i at Mānoa, and Brian Shiro of the U.S. Geological Survey’s Hawaiian Volcano Observatory.

The event provided a wealth of simultaneous data about the behavior of both high- and low-viscosity magma, as well as about the pre-eruption stresses in the solid rock underlying Kīlauea.

A fast-moving lava channel flowed from the Ahu’aila’au cone about 10 kilometers to the ocean, covering about 36 square kilometers of land along the way and creating 3.5 square kilometers of new land along the coast. Where the channel slowed down in flat areas, it spread out and formed a braided pattern, seen here. Credit: B. Shiro, USGS

Tectonic and volcanic activity causes fractures, called faults, to form in the rock that makes up the Earth’s crust. When geologic stresses cause these faults to move against each other, geoscientists measure the 3-D orientation and movement of the faults using seismic instruments.

By studying what happened in Kīlauea’s lower East Rift Zone in 2018, Roman and her colleagues determined that the direction of the fault movements in the lower East Rift Zone before and during the volcanic eruption could be used to estimate the viscosity of rising magma during periods of precursory unrest.

“We were able to show that with robust monitoring we can relate pressure and stress in a volcano’s plumbing system to the underground movement of more viscous magma,” Roman explained. “This will enable monitoring experts to better anticipate the eruption behavior of volcanoes like Kīlauea and to tailor response strategies in advance.”

Reference: “Earthquakes indicated magma viscosity during Kīlauea’s 2018 eruption” by D. C. Roman, A. Soldati, D. B. Dingwell, B. F. Houghton and B. R. Shiro, 7 April 2021, Nature.
DOI: 10.1038/s41586-021-03400-x

The research was supported by an Alexander von Humboldt postdoctoral fellowship, the European Research Council Advanced Grant 834225, the U.S. National Science Foundation, and U.S. Geological Survey Disaster Supplemental Research funding.

Secret to Building Superconducting Quantum Computers With Massive Processing Power

NIST physicists measured and controlled a superconducting quantum bit (qubit) using light-conducting fiber (indicated by white arrow) instead of metal electrical cables like the 14 shown here inside a cryostat. By using fiber, researchers could potentially pack a million qubits into a quantum computer rather than just a few thousand. Credit: F. Lecocq/NIST

Optical Fiber Could Boost Power of Superconducting Quantum Computers

The secret to building superconducting quantum computers with massive processing power may be an ordinary telecommunications technology — optical fiber. 

Physicists at the National Institute of Standards and Technology (NIST) have measured and controlled a superconducting quantum bit (qubit) using light-conducting fiber instead of metal electrical wires, paving the way to packing a million qubits into a quantum computer rather than just a few thousand. The demonstration is described in the March 25 issue of Nature.

Superconducting circuits are a leading technology for making quantum computers because they are reliable and easily mass produced. But these circuits must operate at cryogenic temperatures, and schemes for wiring them to room-temperature electronics are complex and prone to overheating the qubits. A universal quantum computer, capable of solving any type of problem, is expected to need about 1 million qubits. Conventional cryostats — supercold dilution refrigerators — with metal wiring can only support thousands at the most.

Optical fiber, the backbone of telecommunications networks, has a glass or plastic core that can carry a high volume of light signals without conducting heat. But superconducting quantum computers use microwave pulses to store and process information. So the light needs to be converted precisely to microwaves. 

To solve this problem, NIST researchers combined the fiber with a few other standard components that convert, convey and measure light at the level of single particles, or photons, which could then be easily converted into microwaves. The system worked as well as metal wiring and maintained the qubit’s fragile quantum states.

“I think this advance will have high impact because it combines two totally different technologies, photonics and superconducting qubits, to solve a very important problem,” NIST physicist John Teufel said. “Optical fiber can also carry far more data in a much smaller volume than conventional cable.”

Normally, researchers generate microwave pulses at room temperature and then deliver them through coaxial metal cables to cryogenically maintained superconducting qubits. The new NIST setup used an optical fiber instead of metal to guide light signals to cryogenic photodetectors that converted signals back to microwaves and delivered them to the qubit. For experimental comparison purposes, microwaves could be routed to the qubit through either the photonic link or a regular coaxial line.

The “transmon” qubit used in the fiber experiment was a device known as a Josephson junction embedded in a three-dimensional reservoir or cavity. This junction consists of two superconducting metals separated by an insulator. Under certain conditions an electrical current can cross the junction and may oscillate back and forth. By applying a certain microwave frequency, researchers can drive the qubit between low-energy and excited states (1 or 0 in digital computing). These states are based on the number of Cooper pairs — bound pairs of electrons with opposite properties — that have “tunneled” across the junction. 

The NIST team conducted two types of experiments, using the photonic link to generate microwave pulses that either measured or controlled the quantum state of the qubit. The method is based on two relationships: The frequency at which microwaves naturally bounce back and forth in the cavity, called the resonance frequency, depends on the qubit state. And the frequency at which the qubit switches states depends on the number of photons in the cavity.
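
Both relationships are the standard signature of dispersive qubit-cavity coupling; the sketch below states that textbook Hamiltonian in our own notation (χ for the dispersive shift — the paper's actual symbols and values are not given here) to show how the two readout handles fall out of a single coupling term.

```latex
% Standard dispersive qubit-cavity Hamiltonian (textbook form; our notation,
% not taken from the paper): a, a^\dagger are cavity photon operators,
% \sigma_z is the qubit state, \chi is the dispersive shift.
\frac{H}{\hbar} = \omega_c\, a^\dagger a + \frac{\omega_q}{2}\,\sigma_z + \chi\, a^\dagger a\,\sigma_z
% Grouping the coupling term two ways recovers both relationships:
%   (\omega_c + \chi\,\sigma_z)\, a^\dagger a             -- cavity resonance depends on qubit state
%   \tfrac{1}{2}(\omega_q + 2\chi\, a^\dagger a)\,\sigma_z -- qubit frequency depends on photon number
```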

Researchers generally started the experiments with a microwave generator. To control the qubit’s quantum state, devices called electro-optic modulators converted microwaves to higher optical frequencies. These light signals streamed through optical fiber from room temperature to 4 kelvins (minus 269 C or minus 452 F) down to 20 millikelvins (thousandths of a kelvin), where they landed in high-speed semiconductor photodetectors, which converted the light signals back to microwaves that were then sent to the quantum circuit.
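
As a rough sketch of that chain, the toy calculation below (our own illustration with assumed numbers, not NIST's parameters or code) shows why a photodetector whose current tracks optical power hands the microwave tone back at the cold stage.

```python
import numpy as np

# Toy model of the photonic link (assumed numbers, not NIST's): an
# electro-optic modulator imprints a microwave tone on the laser intensity,
# and the cryogenic photodetector's current tracks optical power, so the
# tone reappears as an electrical signal next to the qubit.
f_uw = 5e9      # assumed microwave drive frequency, Hz
resp = 0.8      # assumed photodetector responsivity, A/W
p_avg = 1e-6    # assumed average optical power at the detector, W
depth = 0.5     # assumed intensity-modulation depth

t = np.linspace(0.0, 2e-9, 4001)                            # 2 ns window
p_opt = p_avg * (1 + depth * np.cos(2 * np.pi * f_uw * t))  # modulated light
i_pd = resp * p_opt                                         # photocurrent, A

# The AC component of the photocurrent is the recovered microwave tone.
print(f"recovered tone amplitude: {resp * p_avg * depth * 1e9:.0f} nA")
```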

In these experiments, researchers sent signals to the qubit at its natural resonance frequency, to put it into the desired quantum state. The qubit oscillated between its ground and excited states when there was adequate laser power. 

To measure the qubit’s state, researchers used an infrared laser to launch light at a specific power level through the modulators, fiber and photodetectors to measure the cavity’s resonance frequency.

Researchers first started the qubit oscillating, with the laser power suppressed, and then used the photonic link to send a weak microwave pulse to the cavity. The cavity frequency accurately indicated the qubit’s state 98% of the time, the same accuracy as obtained using the regular coaxial line.

The researchers envision a quantum processor in which optical fibers carry signals to and from the qubits, with each fiber able to carry thousands of such signals.

Reference: “Control and readout of a superconducting qubit using a photonic link” by F. Lecocq, F. Quinlan, K. Cicak, J. Aumentado, S. A. Diddams and J. D. Teufel, 24 March 2021, Nature.
DOI: 10.1038/s41586-021-03268-x

Physical Inactivity Linked to More Severe COVID-19 Infection and Higher Risk of Death

Hospital Emergency

Surpassed only by advanced age and organ transplant as a risk factor, large study shows

Physical inactivity is linked to more severe COVID-19 infection and a heightened risk of dying from the disease, finds a large US study published online in the British Journal of Sports Medicine.

Patients with COVID-19 who were consistently inactive during the 2 years preceding the pandemic were more likely to be admitted to hospital, to require intensive care, and to die than were patients who had consistently met physical activity guidelines, the findings show.

As a risk factor for severe disease, physical inactivity was surpassed only by advanced age and a history of organ transplant.

Several risk factors for severe COVID-19 infection have been identified, including advanced age, male sex, and certain underlying medical conditions, such as diabetes, obesity, and cardiovascular disease.

But physical inactivity is not one of them, even though it is a well known contributory risk factor for several long term conditions, including those associated with severe COVID-19, point out the researchers.

To explore its potential impact on the severity of the infection, including hospital admission rates, need for intensive care, and death, the researchers compared these outcomes in 48,440 adults with confirmed COVID-19 infection between January and October 2020.

The patients’ average age was 47; nearly two thirds were women (62%). Their average body mass index (BMI) was 31, which is classified as obese.

Around half had no underlying conditions, such as diabetes, COPD, cardiovascular disease, kidney disease, or cancer; nearly 1 in 5 (18%) had only one; and almost a third (32%) had two or more.

All of them had reported their level of regular physical activity at least three times between March 2018 and March 2020 at outpatient clinics. This was classified as consistently inactive (0-10 mins/week); some activity (11-149 mins/week); or consistently meeting physical activity guidelines (150+ mins/week).
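
For concreteness, those three self-report categories amount to a simple binning rule; here is a minimal sketch of it (illustrative only, with cut-points taken from the study's definitions — the function name is ours):

```python
# Illustrative binning of self-reported weekly activity minutes into the
# study's three categories (cut-points from the paper; code is ours).
def activity_category(minutes_per_week: float) -> str:
    if minutes_per_week <= 10:
        return "consistently inactive"          # 0-10 min/week
    if minutes_per_week < 150:
        return "some activity"                  # 11-149 min/week
    return "consistently meeting guidelines"    # 150+ min/week

# Example: two 45-minute brisk walks per week -> "some activity"
print(activity_category(90))
```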

Some 7% were consistently meeting physical activity guidelines; 15% were consistently inactive; and the remainder reported some activity.

White patients were most likely to consistently meet physical activity guidelines (10%), followed by Asian patients (7%), Hispanic patients (6%) and African-American patients (5%).

Some 9% of the total were admitted to hospital; around 3% required intensive care; and 2% died. Consistently meeting physical activity guidelines was strongly associated with a reduced risk of these outcomes.

After taking account of potentially influential factors, such as race, age, and underlying medical conditions, patients with COVID-19 who were consistently physically inactive were more than twice as likely to be admitted to the hospital as those who clocked up 150+ minutes of physical activity every week.

They were also 73% more likely to require intensive care, and 2.5 times more likely to die of the infection.

And patients who were consistently inactive were also 20% more likely to be admitted to the hospital, 10% more likely to require intensive care, and 32% more likely to die of their infection than were patients who were doing some physical activity regularly.

This is an observational study, and as such, can’t establish cause. The study also relied on patients’ own assessments of their physical activity. Nor was there any measure of exercise intensity beyond the threshold of ‘moderate to strenuous exercise’ (such as a brisk walk).

But the study was large and ethnically diverse. And the researchers point out: “It is notable that being consistently inactive was a stronger risk factor for severe COVID-19 outcomes than any of the underlying medical conditions and risk factors identified by [The Centers for Disease Control] except for age and a history of organ transplant.

“In fact, physical inactivity was the strongest risk factor across all outcomes, compared with the commonly cited modifiable risk factors, including smoking, obesity, diabetes, hypertension [high blood pressure], cardiovascular disease and cancer.”

They conclude: “We recommend that public health authorities inform all populations that short of vaccination and following public health safety guidelines such as social distancing and mask use, engaging in regular [physical activity] may be the single most important action individuals can take to prevent severe COVID-19 and its complications, including death.

“This message is especially important given the increased barriers to achieving regular [physical activity] during lockdowns and other pandemic restrictions.”

Reference: “Physical inactivity is associated with a higher risk for severe COVID-19 outcomes: a study in 48 440 adult patients” by Robert Sallis, Deborah Rohm Young, Sara Y Tartof, James F Sallis, Jeevan Sall, Qiaowu Li, Gary N Smith and Deborah A Cohen, 13 April 2021, British Journal of Sports Medicine.
DOI: 10.1136/bjsports-2021-104080

Funding: Kaiser Permanente Community Benefits Funds

Female Monkeys Use Males As “Hired Guns” for Defense Against Predators
Putty Nosed Monkey

Female putty-nosed monkey. Credit: C. Kolopp/WCS

  • Female putty-nosed monkeys use calls just to recruit males when certain predators are detected
  • Results suggest that different “dialects” exist among different populations of monkeys

Researchers with the Wildlife Conservation Society’s (WCS) Congo Program and the Nouabalé-Ndoki Foundation found that female putty-nosed monkeys (Cercopithecus nictitans) use males as “hired guns” to defend them against predators such as leopards.

Publishing their results in the journal Royal Society Open Science, the team discovered that female monkeys use alarm calls to recruit males to defend them from predators. The researchers conducted the study among 19 different groups of wild putty-nosed monkeys, a type of forest guenon, in Mbeli Bai, a study area within the forests in Nouabalé-Ndoki National Park, Northern Republic of Congo.

The results support the idea that females’ general alarm call requires males to assess the nature of the threat, and that it serves to recruit males for group defense. Females cease the alarm call only when males produce calls associated with anti-predator defense. The results suggest that alarm-calling strategies depend on the sex of the signaler: females recruit males, who identify themselves as they approach, for protection, while males demonstrate their quality as group defenders to females, probably to secure future reproduction opportunities.

Males advertise their commitment to serve as hired guns by emitting general “pyow” calls while approaching the rest of their group — a call containing little information about ongoing events, but carrying cues to male identity, similar to a signature call. Hearing his “pyow” call as a male approaches enables females to identify high-quality group defenders from a distance. This might contribute to long-term male reputations within groups, which would equip females to choose the males that most reliably ensure their offspring’s survival.

Said the study’s lead author Frederic Gnepa Mehon of WCS’s Congo Program and the Nouabalé-Ndoki Foundation: “Our observations on other forest guenons suggest that if males do not prove to be good group protectors, they likely have to leave groups earlier than good defenders. To date, it remains unclear whether female guenons have a say in mate choice, but our current results strongly suggest this possibility.”

In the course of this study, a new call type, named “kek,” was consistently recorded. The team found that males used the “kek” call when exposed to a moving leopard model created by the researchers for field experiments. Previous studies of putty-nosed monkeys in Nigeria never reported “keks.” This new call type could thus be population-specific, or it could be a call uttered towards moving threats. If “kek” calls are population-specific, this could suggest that different “dialects” exist amongst putty-nosed monkeys — a strong indicator of vocal production learning, whose existence in the animal kingdom is fiercely debated.

Said co-author Claudia Stephan of the Wildlife Conservation Society’s (WCS) Congo Program and the Nouabalé-Ndoki Foundation: “Sexual selection might play a far more important role in the evolution of communication systems than previously thought. In a phylogenetic context, what strategies ultimately drove the evolution of communication in females and in males? Might there even be any parallels to female and male monkeys’ different communication strategies in human language?”

The authors say the current results considerably advance our understanding of female and male alarm calling, both in terms of sexual dimorphisms in call production and in call usage. Interestingly, although males have more complex vocal repertoires than females, the cognitive skills needed to use the simpler female repertoire strategically seem to be more demanding than those needed to follow male calling strategies. In other words, female putty-nosed monkeys’ alarms may contain little information, but they do so by design, namely to facilitate the manipulation of male behavior.

Reference: “Female putty-nosed monkeys (Cercopithecus nictitans) vocally recruit males for predator defence” by Frederic Gnepa Mehon and Claudia Stephan, 17 March 2021, Royal Society Open Science.
DOI: 10.1098/rsos.202135

MIT’s Comprehensive Map of the SARS-CoV-2 Genome and Analysis of Nearly 2,000 COVID Mutations
Comprehensive Map of the SARS-CoV-2 Genome

MIT researchers generated what they describe as the most complete gene annotation of the SARS-CoV-2 genome. Credit: MIT News

MIT researchers have determined the virus’ protein-coding gene set and analyzed new mutations’ likelihood of helping the virus adapt.

In early 2020, a few months after the Covid-19 pandemic began, scientists were able to sequence the full genome of SARS-CoV-2, the virus that causes the Covid-19 infection. While many of its genes were already known at that point, the full complement of protein-coding genes was unresolved.

Now, after performing an extensive comparative genomics study, MIT researchers have generated what they describe as the most accurate and complete gene annotation of the SARS-CoV-2 genome. In their study, which was published on May 11, 2021, in Nature Communications, they confirmed several protein-coding genes and found that a few others that had been suggested as genes do not code for any proteins.

“We were able to use this powerful comparative genomics approach for evolutionary signatures to discover the true functional protein-coding content of this enormously important genome,” says Manolis Kellis, who is the senior author of the study and a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) as well as a member of the Broad Institute of MIT and Harvard.

The research team also analyzed nearly 2,000 mutations that have arisen in different SARS-CoV-2 isolates since it began infecting humans, allowing them to rate how important those mutations may be in changing the virus’ ability to evade the immune system or become more infectious.

Comparative genomics

The SARS-CoV-2 genome consists of nearly 30,000 RNA bases. Scientists have identified several regions known to encode protein-coding genes, based on their similarity to protein-coding genes found in related viruses. A few other regions were suspected to encode proteins, but they had not been definitively classified as protein-coding genes.

To nail down which parts of the SARS-CoV-2 genome actually contain genes, the researchers performed a type of study known as comparative genomics, in which they compare the genomes of similar viruses. The SARS-CoV-2 virus belongs to a subgenus of viruses called Sarbecovirus, most of which infect bats. The researchers performed their analysis on SARS-CoV-2, SARS-CoV (which caused the 2003 SARS outbreak), and 42 strains of bat sarbecoviruses.

Kellis has previously developed computational techniques for doing this type of analysis, which his team has also used to compare the human genome with genomes of other mammals. The techniques are based on analyzing whether certain DNA or RNA bases are conserved between species, and comparing their patterns of evolution over time.

Using these techniques, the researchers confirmed six protein-coding genes in the SARS-CoV-2 genome in addition to the five that are well established in all coronaviruses. They also determined that the region that encodes a gene called ORF3a also encodes an additional gene, which they name ORF3c. The gene has RNA bases that overlap with ORF3a but occur in a different reading frame. This gene-within-a-gene is rare in large genomes, but common in many viruses, whose genomes are under selective pressure to stay compact. The role for this new gene, as well as several other SARS-CoV-2 genes, is not known yet.
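
The gene-within-a-gene idea is easy to see in miniature. The toy sketch below (a hypothetical 12-base sequence and pared-down codon table, ours — not the real ORF3a) shows how shifting the reading frame by one base yields an entirely different peptide from the same RNA:

```python
# Toy gene-within-a-gene: the same RNA read in two frames gives two
# different peptides. Sequence and table are hypothetical, not ORF3a.
CODON_TABLE = {"AUG": "M", "CAU": "H", "GAU": "D", "UGA": "*",
               "UGC": "C", "AUU": "I"}

def translate(rna: str, frame: int) -> str:
    """Translate an RNA string in reading frame 0, 1, or 2."""
    codons = (rna[i:i + 3] for i in range(frame, len(rna) - 2, 3))
    return "".join(CODON_TABLE.get(c, "?") for c in codons)

rna = "AUGCAUGAUUGA"
print(translate(rna, 0))  # MHD*  (AUG-CAU-GAU-UGA)
print(translate(rna, 1))  # CMI   (UGC-AUG-AUU)
```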

The researchers also showed that five other regions that had been proposed as possible genes do not encode functional proteins, and they also ruled out the possibility that there are any more conserved protein-coding genes yet to be discovered.

“We analyzed the entire genome and are very confident that there are no other conserved protein-coding genes,” says Irwin Jungreis, lead author of the study and a CSAIL research scientist. “Experimental studies are needed to figure out the functions of the uncharacterized genes, and by determining which ones are real, we allow other researchers to focus their attention on those genes rather than spend their time on something that doesn’t even get translated into protein.”

The researchers also recognized that many previous papers used not only incorrect gene sets, but sometimes also conflicting gene names. To remedy the situation, they brought together the SARS-CoV-2 community and presented a set of recommendations for naming SARS-CoV-2 genes, in a separate paper published a few weeks ago in Virology.

Fast evolution

In the new study, the researchers also analyzed more than 1,800 mutations that have arisen in SARS-CoV-2 since it was first identified. For each gene, they compared how rapidly that particular gene has evolved in the past with how much it has evolved since the current pandemic began.

They found that in most cases, genes that evolved rapidly for long periods of time before the current pandemic have continued to do so, and those that tended to evolve slowly have maintained that trend. However, the researchers also identified exceptions to these patterns, which may shed light on how the virus has evolved as it has adapted to its new human host, Kellis says.

In one example, the researchers identified a region of the nucleocapsid protein, which surrounds the viral genetic material, that had many more mutations than expected from its historical evolution patterns. This protein region is also classified as a target of human B cells. Therefore, mutations in that region may help the virus evade the human immune system, Kellis says.

“The most accelerated region in the entire genome of SARS-CoV-2 is sitting smack in the middle of this nucleocapsid protein,” he says. “We speculate that those variants that don’t mutate that region get recognized by the human immune system and eliminated, whereas those variants that randomly accumulate mutations in that region are in fact better able to evade the human immune system and remain in circulation.”

The researchers also analyzed mutations that have arisen in variants of concern, such as the B.1.1.7 strain from England, the P.1 strain from Brazil, and the B.1.351 strain from South Africa. Many of the mutations that make those variants more dangerous are found in the spike protein, and help the virus spread faster and avoid the immune system. However, each of those variants carries other mutations as well.

“Each of those variants has more than 20 other mutations, and it’s important to know which of those are likely to be doing something and which aren’t,” Jungreis says. “So, we used our comparative genomics evidence to get a first-pass guess at which of these are likely to be important based on which ones were in conserved positions.”

This data could help other scientists focus their attention on the mutations that appear most likely to have significant effects on the virus’ infectivity, the researchers say. They have made the annotated gene set and their mutation classifications available in the University of California at Santa Cruz Genome Browser for other researchers who wish to use it.

“We can now go and actually study the evolutionary context of these variants and understand how the current pandemic fits in that larger history,” Kellis says. “For strains that have many mutations, we can see which of these mutations are likely to be host-specific adaptations, and which mutations are perhaps nothing to write home about.”

Reference: “SARS-CoV-2 gene content and COVID-19 mutation impact by comparing 44 Sarbecovirus genomes” by Irwin Jungreis, Rachel Sealfon and Manolis Kellis, 11 May 2021, Nature Communications.
DOI: 10.1038/s41467-021-22905-7

The research was funded by the National Human Genome Research Institute and the National Institutes of Health. Rachel Sealfon, a research scientist at the Flatiron Institute Center for Computational Biology, is also an author of the paper.

Searching for Signs of Life on Mars: Perseverance’s Robotic Arm Starts Conducting Science
Mastcam-Z Views 'Santa Cruz' on Mars

Mastcam-Z Views ‘Santa Cruz’ on Mars: NASA’s Perseverance Mars rover used its dual-camera Mastcam-Z imager to capture this image of “Santa Cruz,” a hill about 1.5 miles (2.5 kilometers) away from the rover, on April 29, 2021, the 68th Martian day, or sol, of the mission. The entire scene is inside of Mars’ Jezero Crater; the crater’s rim can be seen on the horizon line beyond the hill. Credit: NASA/JPL-Caltech/ASU/MSSS

NASA’s newest Mars rover is beginning to study the floor of an ancient crater that once held a lake.

NASA’s Perseverance rover has been busy serving as a communications base station for the Ingenuity Mars Helicopter and documenting the rotorcraft’s historic flights. But the rover has also been busy focusing its science instruments on rocks that lie on the floor of Jezero Crater.

What insights they turn up will help scientists create a timeline of when an ancient lake formed there, when it dried, and when sediment began piling up in the delta that formed in the crater long ago. Understanding this timeline should help date rock samples – to be collected later in the mission – that might preserve a record of ancient microbes.

Perseverance Mastcam-Z Images Intriguing Rocks

Perseverance’s Mastcam-Z Images Intriguing Rocks: NASA’s Perseverance rover viewed these rocks with its Mastcam-Z imager on April 27, 2021. Credit: NASA/JPL-Caltech/ASU/MSSS

A camera called WATSON on the end of the rover’s robotic arm has taken detailed shots of the rocks. A pair of zoomable cameras that make up the Mastcam-Z imager on the rover’s “head” has also surveyed the terrain. And a laser instrument called SuperCam has zapped some of the rocks to detect their chemistry. These instruments and others allow scientists to learn more about Jezero Crater and to home in on areas they might like to study in greater depth.

One important question scientists want to answer: whether these rocks are sedimentary (like sandstone) or igneous (formed by volcanic activity). Each type of rock tells a different kind of story. Some sedimentary rocks – formed in the presence of water from rock and mineral fragments like sand, silt, and clay – are better suited to preserving biosignatures, or signs of past life. Igneous rocks, on the other hand, are more precise geological clocks that allow scientists to create an accurate timeline of how an area formed.

NASA Perseverance Mars Rover Watson Focus Test

NASA’s Perseverance Mars rover used the WATSON camera on the end of its robotic arm to conduct a focus test on May 10, 2021, the 79th Martian day, or sol, of the mission. Credit: NASA/JPL-Caltech/MSSS

One complicating factor is that the rocks around Perseverance have been eroded by wind over time and covered with younger sand and dust. On Earth, a geologist might trudge into the field and break a rock sample open to get a better idea of its origins. “When you look inside a rock, that’s where you see the story,” said Ken Farley of Caltech, Perseverance’s project scientist.

While Perseverance doesn’t have a rock hammer, it does have other ways to peer past millennia’s worth of dust. When scientists find a particularly enticing spot, they can reach out with the rover’s arm and use an abrader to grind and flatten a rock’s surface, revealing its internal structure and composition. Once they’ve done that, the team gathers more detailed chemical and mineralogical information using arm instruments called PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning for Habitable Environments with Raman & Luminescence for Organics & Chemicals).

NASA's Perseverance Mars Rover Using PIXL

Perseverance’s PIXL at Work on Mars (Illustration): In this illustration, NASA’s Perseverance Mars rover uses the Planetary Instrument for X-ray Lithochemistry (PIXL). Located on the turret at the end of the rover’s robotic arm, the X-ray spectrometer will help search for signs of ancient microbial life in rocks. Credit: NASA/JPL-Caltech.

“The more rocks you look at, the more you know,” Farley said.

And the more the team knows, the better samples they can ultimately collect with the drill on the rover’s arm. The best ones will be stored in special tubes and deposited in collections on the planet’s surface for eventual return to Earth.

More About the Mission

A key objective for Perseverance’s mission on Mars is astrobiology, including the search for signs of ancient microbial life. The rover will characterize the planet’s geology and past climate, pave the way for human exploration of the Red Planet, and be the first mission to collect and cache Martian rock and regolith (broken rock and dust).

Subsequent NASA missions, in cooperation with ESA (European Space Agency), would send spacecraft to Mars to collect these sealed samples from the surface and return them to Earth for in-depth analysis.

The Mars 2020 Perseverance mission is part of NASA’s Moon to Mars exploration approach, which includes Artemis missions to the Moon that will help prepare for human exploration of the Red Planet.

JPL, which is managed for NASA by Caltech in Pasadena, California, built and manages operations of the Perseverance rover.

Magnetoelectric Chips to Power a New Generation of More Efficient Computing Devices

Advanced Computer Chip Concept

Harnessing the Hum of Fluorescent Lights for More Efficient Computing

The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.

A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.

Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic state are linked — a change in one produces a change in the other. The property could be key to a new generation of computing devices called magnetoelectrics.
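
The figure of merit behind that coupling is the magnetostrictive strain. A minimal statement of the standard definition, in our notation rather than the paper's:

```latex
% Saturation magnetostriction (standard definition; our notation): the
% fractional length change as the material is magnetized from zero to
% saturation by an applied field.
\lambda_s = \frac{\Delta L}{L}
% The coupling runs both ways: strain also shifts the magnetic anisotropy,
% which is what lets a voltage-induced deformation rewrite a magnetic bit.
```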

Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.

Made of a combination of iron and gallium, the material is detailed in a paper published today (May 12, 2021) in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; University of California, Berkeley; University of Wisconsin; Purdue University; and elsewhere.

Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.

“A key to making magnetoelectric devices work is finding materials whose electrical and magnetic properties are linked,” Heron said. “And more magnetostriction means that a chip can do the same job with less energy.”

Cheaper magnetoelectric devices with a tenfold improvement

Most of today’s magnetostrictive materials use rare-earth elements, which are too scarce and costly to be used in the quantities needed for computing devices. But Heron’s team has found a way to coax high levels of magnetostriction from inexpensive iron and gallium.

Ordinarily, explains Heron, the magnetostriction of iron-gallium alloy increases as more gallium is added. But those increases level off and eventually begin to fall as the higher amounts of gallium begin to form an ordered atomic structure.

So the research team used a process called low-temperature molecular-beam epitaxy to essentially freeze atoms in place, preventing them from forming an ordered structure as more gallium was added. This way, Heron and his team were able to double the amount of gallium in the material, netting a tenfold increase in magnetostriction compared to unmodified iron-gallium alloys.

“Low-temperature molecular-beam epitaxy is an extremely useful technique — it’s a little bit like spray painting with individual atoms,” Heron said. “And ‘spray painting’ the material onto a surface that deforms slightly when a voltage is applied also made it easy to test its magnetostrictive properties.”

Researchers are working with Intel’s MESO program

The magnetoelectric devices made in the study are several microns in size — large by computing standards. But the researchers are working with Intel to find ways to shrink them to a more useful size that will be compatible with the company’s magnetoelectric spin-orbit device (or MESO) program, one goal of which is to push magnetoelectric devices into the mainstream.

“Intel is great at scaling things and at the nuts and bolts of making a technology actually work at the super-small scale of a computer chip,” Heron said. “They’re very invested in this project and we’re meeting with them regularly to get feedback and ideas on how to ramp up this technology to make it useful in the computer chips that they call MESO.”

While a device that uses the material is likely decades away, Heron’s lab has filed for patent protection through the U-M Office of Technology Transfer.

Reference: “Engineering new limits to magnetostriction through metastability in iron-gallium alloys” by P. B. Meisenheimer, R. A. Steinhardt, S. H. Sung, L. D. Williams, S. Zhuang, M. E. Nowakowski, S. Novakov, M. M. Torunbalci, B. Prasad, C. J. Zollner, Z. Wang, N. M. Dawley, J. Schubert, A. H. Hunter, S. Manipatruni, D. E. Nikonov, I. A. Young, L. Q. Chen, J. Bokor, S. A. Bhave, R. Ramesh, J.-M. Hu, E. Kioupakis, R. Hovden, D. G. Schlom and J. T. Heron, 12 May 2021, Nature Communications.
DOI: 10.1038/s41467-021-22793-x

The research is supported by IMRA America and the National Science Foundation (grant numbers NNCI-1542081, EEC-1160504, DMR-1719875 and DMR-1539918).

Other researchers on the paper include U-M associate professor of materials science and engineering Emmanouil Kioupakis; U-M assistant professor of materials science and engineering Robert Hovden; and U-M graduate student research assistants Peter Meisenheimer and Suk Hyun Sung.

Brand New Physics of Superconducting Metals – Busted

Atoms Electrons Concept

Lancaster scientists have demonstrated that other physicists’ recent “discovery” of the field effect in superconductors is nothing but hot electrons after all.

A team of scientists in the Lancaster Physics Department has found new and compelling evidence that the observation of the field effect in superconducting metals by another group can be explained by a simple mechanism involving the injection of electrons, without the need for novel physics.

Dr. Sergey Kafanov, who initiated this experiment, said: “Our results unambiguously refute the electrostatic field effect claimed by the other group. This gets us back on the ground and helps maintain the health of the discipline.”

The experimental team also includes Ilia Golokolenov, Andrew Guthrie, Yuri Pashkin, and Viktor Tsepelin.

Their work is published in the latest issue of Nature Communications.

Superconducting Circuit Information Processing

Superconducting circuits find applications in sensing and information processing. Credit: Lancaster University

When certain metals are cooled to a few degrees above absolute zero, their electrical resistance vanishes — a striking physical phenomenon known as superconductivity. Many metals, including vanadium, which was used in the experiment, are known to exhibit superconductivity at sufficiently low temperatures.

For decades it was thought that the exceptionally low electrical resistance of superconductors should make them practically impervious to static electric fields, owing to the way the charge carriers can easily arrange themselves to compensate for any external field.

It therefore came as a shock to the physics community when a number of recent publications claimed that sufficiently strong electrostatic fields could affect superconductors in nanoscale structures — and attempted to explain this new effect with corresponding new physics. A related effect is well known in semiconductors and underpins the entire semiconductor industry.

The Lancaster team embedded a similar nanoscale device into a microwave cavity, allowing them to study the alleged electrostatic phenomenon at much shorter timescales than previously investigated. At short timescales, the team could see a clear increase in the noise and energy loss in the cavity — the properties strongly associated with the device temperature. They propose that at intense electric fields, high-energy electrons can “jump” into the superconductor, raising the temperature and therefore increasing the dissipation.

This simple phenomenon can concisely explain the origin of the “electrostatic field effect” in nanoscale structures, without any new physics.

Reference: 12 May 2021, Nature Communications.
DOI: 10.1038/s41467-021-22998-0

Pink Drinks Can Help You Run Faster and Further Compared to Clear Drinks

Runner Sports Drink

A new study led by the Center for Nutraceuticals in the University of Westminster shows that pink drinks can help to make you run faster and further compared to clear drinks.

The researchers found that a pink drink can increase exercise performance by 4.4 percent and can also enhance a ‘feel good’ effect that makes exercise seem easier.

The study, published in the journal Frontiers in Nutrition, is the first investigation to assess the effect of drink color on exercise performance and provides the potential to open a new avenue of future research in the field of sports drinks and exercise.

During the study, participants were asked to run on a treadmill for 30 minutes at a self-selected speed, ensuring their rate of exertion remained consistent. Throughout the exercise, they rinsed their mouths with either a pink artificially sweetened drink that was low in calories or a clear drink that was also artificially sweetened and low in calories.

The two drinks were otherwise identical, differing only in appearance — the researchers added food dye to the pink drink to change its color.

The researchers chose pink as it is associated with perceived sweetness and therefore increases expectations of sugar and carbohydrate intake.

Previous studies have also shown that rinsing the mouth with carbohydrates can improve exercise performance by reducing the perceived intensity of the exercise, so the researchers wanted to assess whether rinsing with a pink drink that had no carbohydrate stimulus could elicit similar benefits through a potential placebo effect.

The results show that the participants ran an average of 212 meters further with the pink drink, while their mean speed during the exercise test increased by 4.4 percent. Feelings of pleasure were also enhanced, meaning participants found running more enjoyable.
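
Those two numbers are mutually consistent, as a back-of-envelope check shows (our arithmetic, not a figure from the study): over a fixed 30-minute run, a 4.4 percent speed gain implies a 4.4 percent distance gain, so the extra 212 meters points to a baseline of roughly 4.8 kilometers.

```python
# Back-of-envelope consistency check (our arithmetic, not from the paper):
# for a fixed-duration run, distance scales with mean speed.
extra_distance_m = 212     # reported extra distance with the pink drink
speed_gain = 0.044         # reported 4.4% increase in mean speed

baseline_m = extra_distance_m / speed_gain   # implied baseline distance
avg_speed_kmh = (baseline_m / 1000) / 0.5    # over a 30-minute run
print(f"implied baseline: {baseline_m:.0f} m at {avg_speed_kmh:.1f} km/h")
# -> roughly 4818 m at about 9.6 km/h, a plausible recreational pace
```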

Future exploratory research is necessary to find out whether the proposed placebo effect causes a similar activation to the reward areas of the brain that are commonly reported when rinsing the mouth with carbohydrates.

Talking about the study, Dr. Sanjoy Deb, corresponding author on the paper from the University of Westminster, said: “The influence of color on athletic performance has received interest previously, from its effect on a sportsperson’s kit to its impact on testosterone and muscular power. Similarly, the role of color in gastronomy has received widespread interest, with research published on how visual cues or color can affect subsequent flavor perception when eating and drinking.

“The findings from our study combine the art of gastronomy with performance nutrition, as adding a pink colorant to an artificially sweetened solution not only enhanced the perception of sweetness, but also enhanced feelings of pleasure, self-selected running speed, and distance covered during a run.”

Reference: 12 May 2021, Frontiers in Nutrition.
DOI: 10.3389/fnut.2021.678105

James Webb Telescope’s Golden Mirror Wings Open for the Last Time on Earth

Webb Telescope's Golden Mirror Wings

For the last time while it is on Earth, the world’s largest and most powerful space science telescope opened its iconic primary mirror. This event marked a key milestone in preparing the observatory for launch later this year.

As part of NASA’s James Webb Space Telescope’s final tests, the 6.5 meter (21 feet 4 inch) mirror was commanded to fully expand and lock itself into place, just like it would in space. The conclusion of this test represents the team’s final checkpoint in a long series of tests designed to ensure Webb’s 18 hexagonal mirrors are prepared for a long journey in space, and a life of profound discovery. After this, all of Webb’s many movable parts will have confirmed in testing that they can perform their intended operations after being exposed to the expected launch environment.

This video shows the James Webb Space Telescope’s mirrors during their long string of tests, from individual segments to the final tests of the assembled mirror. Credit: NASA’s Goddard Space Flight Center. Producer: Michael P. Menzel (AIMM); Lead Videographer: Michael McClare (KBRwyle); Videographer: Sophia Roberts (AIMM); Video Editor: Michael P. Menzel (AIMM)

“The primary mirror is a technological marvel. The lightweight mirrors, coatings, actuators and mechanisms, electronics and thermal blankets when fully deployed form a single precise mirror that is truly remarkable,” said Lee Feinberg, optical telescope element manager for Webb at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “This is not just the final deployment test sequence that the team has pulled off to prepare Webb for a life in space, but it means when we finish, that the primary mirror will be locked in place for launch. It’s humbling to think about the hundreds of dedicated people across the entire country who worked so hard to design and build the primary mirror, and now to know launch is so close.”

Unfurling James Webb Telescope Mirror

The process of deploying, moving, expanding and unfurling all of Webb’s many movable pieces after they have been exposed to a simulated launch is the best way to ensure they will perform as intended once in space. Credit: NASA/Chris Gunn

Making the testing conditions close to what Webb will experience in space helps to ensure the observatory is fully prepared for its science mission one million miles away from Earth.

Commands to unlatch and deploy the side panels of the mirror were relayed from Webb’s testing control room at Northrop Grumman in Redondo Beach, California. The software instructions sent, and the mechanisms that operated, are the same as those that will be used in space. Special gravity-offsetting equipment was attached to Webb to simulate the zero-gravity environment in which its complex mechanisms will operate. All of the final thermal blanketing and innovative shielding designed to protect its mirrors and instruments from interference were in place during testing.

To observe objects in the distant cosmos, and to do science that’s never been done before, Webb’s mirror needs to be so large that it cannot fit inside any rocket available in its fully extended form. Like a piece of origami artwork, Webb contains many movable parts that have been specifically designed to fold themselves to a compact formation that is considerably smaller than when the observatory is fully deployed. This allows it to just barely fit inside a 16-foot (5-meter) rocket fairing, with little room to spare.

James Webb Telescope Mirror Final Earth

The conclusion of this test represents the team’s final checkpoint in a long series of tests designed to ensure Webb’s 18 hexagonal mirrors are prepared for a long life of profound discovery. Credit: NASA/Chris Gunn

To deploy, operate and bring its golden mirrors into focus requires 132 individual actuators and motors in addition to complex backend software to support it. A proper deployment in space is critically important to the process of fine-tuning Webb’s individual mirrors into one functional and massive reflector. Once the wings are fully extended and in place, extremely precise actuators on the backside of the mirrors position and bend or flex each mirror into a specific prescription. Testing of each actuator and their expected movements was completed in a final functional test earlier this year. 

“Pioneering space observatories like Webb only come to fruition when dedicated individuals work together to surmount the challenge of building something that has never been done before. I am especially proud of our teams that built Webb’s mirrors, and the complex back-end electronics and software that will empower it to see deep into space with extreme precision. It has been very interesting, and extremely rewarding to see it all come together. The completion of this last test on its mirrors is especially exciting because of how close we are to launch later this year,” said Ritva Keski-Kuha, deputy optical telescope element manager for Webb at Goddard. 

Following this test, engineers will immediately move on to tackle Webb’s final few tests, which include extending and then restowing two radiator assemblies that help the observatory cool down, and one full extension and restowing of its deployable tower.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.

Breaking Heisenberg: Evading the Uncertainty Principle in Quantum Physics
Quantum Entanglement Schematic

Schematic of the entangled drumheads. Credit: Aalto University

New technique gets around 100-year-old rule of quantum physics for the first time.

The uncertainty principle, first introduced by Werner Heisenberg in the late 1920s, is a fundamental concept of quantum mechanics. In the quantum world, particles like the electrons that power all electrical products can also behave like waves. As a result, particles cannot have a well-defined position and momentum simultaneously. For instance, measuring the momentum of a particle disturbs its position, so the position cannot be precisely defined.

In recent research, published in Science, a team led by Prof. Mika Sillanpää at Aalto University in Finland has shown that there is a way to get around the uncertainty principle. The team included Dr. Matt Woolley from the University of New South Wales in Australia, who developed the theoretical model for the experiment.

Instead of elementary particles, the team carried out the experiments using much larger objects: two vibrating drumheads one-fifth of the width of a human hair. The drumheads were carefully coerced into behaving quantum mechanically.

“In our work, the drumheads exhibit a collective quantum motion. The drums vibrate in an opposite phase to each other, such that when one of them is in an end position of the vibration cycle, the other is in the opposite position at the same time. In this situation, the quantum uncertainty of the drums’ motion is canceled if the two drums are treated as one quantum-mechanical entity,” explains the lead author of the study, Dr. Laure Mercier de Lepinay.

This means that the researchers were able to simultaneously measure the position and the momentum of the two drumheads — which should not be possible according to the Heisenberg uncertainty principle. Breaking the rule allows them to characterize extremely weak forces driving the drumheads.
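
The standard argument behind such a loophole is short enough to state outright (our notation, not the paper's): each drum on its own obeys the uncertainty relation, but the collective coordinates of the pair commute, so nothing forbids knowing both of them exactly.

```latex
% Each drum alone obeys \Delta x\,\Delta p \ge \hbar/2, but for the pair the
% cross terms [x_1, p_2] and [x_2, p_1] vanish, so:
[\,x_1 + x_2,\; p_1 - p_2\,] = [x_1, p_1] - [x_2, p_2] = i\hbar - i\hbar = 0
% Hence X_+ = x_1 + x_2 and P_- = p_1 - p_2 can be measured simultaneously
% to arbitrary precision -- the "quantum-mechanics-free" collective mode.
```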

“One of the drums responds to all the forces of the other drum in the opposing way, kind of with a negative mass,” Sillanpää says.

Furthermore, the researchers also exploited this result to provide the most solid evidence to date that such large objects can exhibit what is known as quantum entanglement. Entangled objects cannot be described independently of each other, even though they may have an arbitrarily large spatial separation. Entanglement allows pairs of objects to behave in ways that contradict classical physics, and is the key resource behind emerging quantum technologies. A quantum computer can, for example, carry out the types of calculations needed to invent new medicines much faster than any supercomputer ever could.

In macroscopic objects, quantum effects like entanglement are very fragile and easily destroyed by disturbances from the surrounding environment. The experiments were therefore carried out at a very low temperature, only a hundredth of a degree above absolute zero (about -273 °C).

In the future, the research group will use these ideas in laboratory tests aiming at probing the interplay of quantum mechanics and gravity. The vibrating drumheads may also serve as interfaces for connecting nodes of large-scale, distributed quantum networks.

Reference: “Quantum mechanics–free subsystem with mechanical oscillators” by Laure Mercier de Lépinay, Caspar F. Ockeloen-Korppi, Matthew J. Woolley and Mika A. Sillanpää, 7 May 2021, Science.
DOI: 10.1126/science.abf5389

Sillanpää’s group is part of the national Centre of Excellence, Quantum Technology Finland (QTF). The research was carried out using OtaNano, a national open access research infrastructure providing state-of-the-art working environment for competitive research in nanoscience and -technology, and in quantum technologies. OtaNano is hosted and operated by Aalto University and VTT.