Physics World
Copyright by IOP Publishing Ltd and individual contributors

The mechanics of squirting cucumbers revealed
https://physicsworld.com/a/the-mechanics-of-squirting-cucumbers-revealed/ | Fri, 29 Nov 2024 16:00:44 +0000
The researchers revealed that the mechanism has been fine-tuned to ensure optimal seed dispersal

The post The mechanics of squirting cucumbers revealed appeared first on Physics World.


The plant kingdom is full of intriguing ways to distribute seeds, from the dandelion pappus effortlessly drifting on air currents to the ballistic launch of fern sporangia.

Not to be outdone, the squirting cucumber (Ecballium elaterium), which is native to the Mediterranean and is often regarded as a weed, has its own unique way of ejecting seeds.

When ripe, the ovoid fruits detach from the stem and, as they do so, explosively eject their seeds in a high-pressure jet of mucilage.

The process, which lasts just 30 milliseconds, launches the seeds at more than 20 metres per second with some landing 10 metres away.

Researchers in the UK have, for the first time, revealed the mechanism behind the squirt by carrying out high-speed videography, computed tomography scans and mathematical modelling.

“The first time we inspected this plant in the Botanic Garden, the seed launch was so fast that we weren’t sure it had happened,” recalls Oxford University mathematical biologist Derek Moulton. “It was very exciting to dig in and uncover the mechanism of this unique plant.”

The researchers found that in the weeks leading up to the ejection, fluid builds up inside the fruits so they become pressurised. Then just before seed dispersal, some of this fluid moves from the fruit to the stem, making it longer and stiffer.

This process crucially causes the fruit to rotate from being vertical to close to an angle of 45 degrees, improving the launch angle for the seeds.
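As a rough illustration of why that rotation matters (textbook drag-free ballistics, not the researchers' model), the range of a projectile launched at speed v peaks at a 45-degree launch angle:

```python
import math

def ballistic_range(speed, angle_deg, g=9.81):
    """Drag-free range of a projectile launched from ground level."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# Sweep launch angles for a 20 m/s launch speed
for angle in (15, 30, 45, 60, 75):
    print(f"{angle:2d} deg -> {ballistic_range(20, angle):5.1f} m")
```

Real seeds land closer (around 10 m) because air drag on the small, light seeds is far from negligible, but the sketch shows why tilting the fruit towards 45 degrees improves dispersal distance.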

During the first milliseconds of ejection, the tip of the stem holding the fruit then recoils away causing the fruit to counter-rotate and detach. As it does so, the pressure inside the fruit causes the seeds to eject at high speed.

Changing certain parameters in the model, such as the stiffness of the stem, reveals that the mechanism has been fine-tuned to ensure optimal seed dispersal. For example, a thicker or stiffer stem would result in the seeds being launched horizontally and distributed over a narrower area.

According to Manchester University physicist Finn Box, the findings could be used for more effective drug delivery systems “where directional release is crucial”.

From the blackboard to the boardroom: why university is a great place to become an entrepreneur
https://physicsworld.com/a/from-the-blackboard-to-the-boardroom-why-university-is-a-great-place-to-become-an-entrepreneur/ | Fri, 29 Nov 2024 11:00:57 +0000
Robert Phillips argues that whatever your career ambitions, entrepreneurship skills will set you up for success

The post From the blackboard to the boardroom: why university is a great place to become an entrepreneur appeared first on Physics World.

What does an idea need to change the world? Physics drives scientific advancements in healthcare, green energy, sustainable materials and many other applications. However, to bridge the gap between research and real-world applications, physicists need to be equipped with entrepreneurship skills.

Many students dream of using their knowledge and passion for physics to change the world, but when it comes to developing your own product, it can be hard to know where to start. That’s where my job comes in – I have been teaching scientists and engineers entrepreneurship for more than 20 years.

Several of the world’s most successful companies, including Sony, Texas Instruments, Intel and Tesla Motors, were founded by physicists, and there are many contemporary examples too. For example, Unitary, an AI company that identifies misinformation and deepfakes, was founded by Sasha Haco, who has a PhD in theoretical physics. In materials science, Aruna Zhuma is the co-founder of Global Graphene Group, which manufactures single layers of graphene oxide for use in electronics. Zhuma has nearly 500 patents, the second largest number of any inventor in the field.

In the last decade quantum technology, which encompasses computing, sensing and communications, has spawned hundreds of start-ups, often spun out from university research. This includes cybersecurity firm ID Quantique, super sensitive detectors from Single Quantum, and quantum computing from D-Wave. Overall, about 8–9% of students in the UK start businesses straight after they graduate, with just over half (58%) of these graduate entrepreneurs founding firms in their subject area.

However, even if you aren’t planning to set up your own business, entrepreneurship skills will be important no matter what you do with your degree. If you work in industry you will need to spot trends, understand customers’ needs and contribute to products and services. In universities, promotion often requires candidates to demonstrate “knowledge transfer”, which means working with partners outside academia.

Taking your ideas to the next level

The first step in kick-starting your entrepreneurship journey is to evaluate your existing experience and goals. Do you already have an idea that you want to take forward, or do you just want to develop skills that will broaden your career options?

If you’re exploring the possibilities of entrepreneurship you should look for curricular modules at your university. These are normally tailored to those with no previous experience and cover topics such as opportunity spotting, market research, basic finance, team building and intellectual property. In addition, in the UK at least, many postgraduate centres for doctoral training (CDTs) now offer modules in business and entrepreneurship as part of their training programmes. These courses sometimes give students the opportunity to take part in live company projects, which are a great way to gain skills.

You should also look out for extracurricular opportunities, from speaker events and workshops to more intensive bootcamps, competitions and start-up weekends. There is no mark or grade for these events, so they allow students to take risks and experiment.

Like any kind of research, commercializing physics requires resources such as equipment and laboratory space. For early-stage founders, access to business incubators – organizations that provide shared facilities – is invaluable. You would use an incubator at a relatively early stage to finalize your product, and they can be found in many universities.

Accelerator programmes, which aim to fast-track your idea once you have a product ready and usually run for a defined length of time, can also be beneficial. For example, the University of Southampton has the Future Worlds Programme based in the physical sciences faculty. Outside academia, the European Space Agency has incubators for space technology ideas at locations throughout Europe, and the Institute of Physics also has workspace and an accelerator programme for recently graduated physicists and especially welcomes quantum technology businesses. The Science and Technology Facilities Council (STFC) CERN Business Incubation Centre focuses on high-energy physics ideas and grants access to equipment that would be otherwise unaffordable for a new start-up.

More accelerator programmes supporting physics ideas include Duality, which is a Chicago-based 12-month accelerator programme for quantum ideas; Quantum Delta NL, based in the Netherlands, which provides programmes and shared facilities for quantum research; and Techstars Industries of the Future, which has locations worldwide.

Securing your future

It’s the multimillion-pound deals that make headlines, but to get to that stage you will need to gain investors’ confidence, securing smaller funds to take your idea forward step by step. This money could be used to protect your intellectual property with a patent, make a prototype or road-test your technology.

Since early-stage businesses are high risk, this money is likely to come from grants and awards, with commercial investors such as venture capital firms or banks holding back until they see the idea can succeed. Funding can come from government agencies like the STFC in the UK, or the US government scheme America’s Seed Fund. These grants encourage innovation, applied research and disruptive new technologies, and no return is expected. Early-stage commercial funding might come from organizations such as Seedcamp, and some accelerator programmes offer funding, or at least organize a “demo day” on completion where you can showcase your venture to potential investors.


While you’re a student, you can take advantage of the venture competitions that run at many universities, where students pitch an idea to a panel of judges. The prizes can be significant, ranging from £10k to £100k, and often come with extra support such as lab space, mentoring and help filing patents. Some of these programmes are physics-specific: for example, the Eli and Britt Harari Enterprise Award at the University of Manchester, sponsored by physics graduate Eli Harari (founder of SanDisk), awards funding for graphene-related ideas.

Finally, remember that physics innovations don’t always happen in the lab. Theoretical physicist Stephen Wolfram founded Wolfram Research in 1988, which makes computational technology including the answer engine Wolfram Alpha.

Making the grade

There are many examples of students and recent graduates making a success of entrepreneurship. Wai Lau is a Manchester physics graduate who also has a master’s degree in enterprise. While learning about entrepreneurship, he started a business focused on digital energy management and identifying energy waste. His business, Cloud Enterprise, has since branched out into a wider range of digital products and services.

Computational physics graduate Gregory Mead at Imperial College London started Musicmetric, which uses complex data analytics to keep track of and rank music artists and is used by music labels and artists. He was able to get funding from Imperial Innovations after making a prototype and Musicmetric was eventually bought by Apple.

AssetCool’s thermal metaphotonics technology uses novel coatings to cool overhead power lines, reducing power losses. The company entered the Venture Further competition at the University of Manchester and has since secured a £2.25m investment from Gritstone Capital.

Entrepreneurship skills are being increasingly recognized as necessary for physics graduates. In the UK, the IOP Degree Accreditation Framework, the standard for physics degrees, expects students to have “business awareness, intellectual property, digital media and entrepreneurship skills”.

Thinking about taking the leap into business can be daunting, but university is the ideal time to think about entrepreneurship. You have nothing to lose and plenty of support available.

Astronomers can play an important role in explaining the causes and consequences of climate change, says astrophysicist
https://physicsworld.com/a/astronomers-can-play-an-important-role-in-explaining-the-causes-and-consequences-of-climate-change-says-astrophysicist/ | Thu, 28 Nov 2024 15:09:22 +0000
This podcast also looks at the connections between clouds and global warming

The post Astronomers can play an important role in explaining the causes and consequences of climate change, says astrophysicist appeared first on Physics World.

Climate science and astronomy have much in common, and this has inspired the astrophysicist Travis Rector to call on astronomers to educate themselves, their students and the wider public about climate change. In this episode of the Physics World Weekly podcast, Rector explains why astronomers should listen to the concerns of the public when engaging about the science of global warming. And, he says the positive outlook of some of his students at the University of Alaska Anchorage makes him believe that a climate solution is possible.

Rector says that some astronomers are reluctant to talk to the public about climate change because they have not mastered the intricacies of the science. Indeed, one aspect of atmospheric physics that has challenged scientists is the role that clouds play in global warming. My second guest this week is the science journalist Michael Allen, who has written a feature article for Physics World called “Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change”. He talks about climate feedback mechanisms that involve clouds and how aerosols affect clouds and the climate.

Optimization algorithm gives laser fusion a boost
https://physicsworld.com/a/optimization-algorithm-gives-laser-fusion-a-boost/ | Thu, 28 Nov 2024 11:26:14 +0000
Simulations suggest that iterative technique could increase the energy output of direct-drive inertial confinement fusion

The post Optimization algorithm gives laser fusion a boost appeared first on Physics World.

A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.

A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.

In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.

In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.

Reducing computing cost and saving time

To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.

The new approach, he explains, relies on approximating some of the laser beam–plasma interactions by considering isotropic plasma profiles. This means that each iteration uses less than 1000 CPU hours, so thousands of iterations can be run for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising step forward for the process.
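The select-the-best-step idea can be sketched in miniature (a toy illustration only, not the Bordeaux group's code): at each iteration, several candidate configurations are scored with a cheap surrogate cost function and only the best improvement is kept. Here the cost is a made-up stand-in for illumination non-uniformity, namely the variance of per-beam weights:

```python
import random

def optimize(cost, x0, step=0.1, n_iters=200, n_candidates=8, seed=1):
    """Greedy iterative search: score a handful of random perturbations
    with a cheap surrogate cost function and keep the best improvement."""
    rng = random.Random(seed)
    x, best = list(x0), cost(x0)
    for _ in range(n_iters):
        candidates = [[xi + rng.uniform(-step, step) for xi in x]
                      for _ in range(n_candidates)]
        trial = min(candidates, key=cost)
        if cost(trial) < best:
            x, best = trial, cost(trial)
    return x, best

# Toy stand-in for illumination non-uniformity: variance of beam weights,
# which is minimized when all beams contribute equally
def nonuniformity(weights):
    mean = sum(weights) / len(weights)
    return sum((w - mean) ** 2 for w in weights)

beams, residual = optimize(nonuniformity, [0.2, 0.9, 0.5, 0.4])
```

Because each surrogate evaluation is cheap, thousands of such steps cost less than one full radiation-hydrodynamic simulation, which is the economy the new method exploits.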

The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, the researchers say the technique could also apply to other pellet geometries and other facilities.

Developing tools

The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser–plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET), which contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illumination configurations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”

As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.

A broader physics application, he adds, would be to incorporate more laser–plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.

Mark Thomson and Jun Cao: a changing of the guard in particle physics
https://physicsworld.com/a/mark-thomson-and-jung-cao-a-changing-of-the-guard-in-particle-physics/ | Wed, 27 Nov 2024 16:02:40 +0000
Two significant appointments mark a new era for global high-energy physics

The post Mark Thomson and Jung Cao: a changing of the guard in particle physics appeared first on Physics World.

All eyes were on the election of Donald Trump as US president earlier this month, whose win overshadowed two big appointments in physics. First, the particle physicist Jun Cao took over as director of China’s Institute of High Energy Physics (IHEP) in October, succeeding Yifang Wang, who had held the job since 2011.

Over the last decade, IHEP has emerged as an important force in particle physics, with plans to build a huge 100 km-circumference machine called the Circular Electron Positron Collider (CEPC). Acting as a “Higgs factory”, such a machine would be hundreds of times bigger and pricier than any project IHEP has ever attempted.

But China is serious about its intentions, aiming to present a full CEPC proposal to the Chinese government next year, with construction starting two years later and the facility opening in 2035. If the CEPC opens as planned, China could leapfrog the rest of the particle-physics community.

China’s intentions will be one pressing issue facing the British particle physicist Mark Thomson, 58, who was named as the 17th director-general at CERN earlier this month. He will take over in January 2026 from current CERN boss Fabiola Gianotti, who will finish her second term next year. Thomson will have a decisive hand in the question of what – and where – the next particle-physics facility should be.

CERN is currently backing the 91 km-circumference Future Circular Collider (FCC), several times bigger than the Large Hadron Collider (LHC). An electron–positron collider designed to study the Higgs boson in unprecedented detail, it could later be upgraded to a hadron collider, dubbed FCC-hh. But with Germany already objecting to the FCC’s steep £12bn price tag, Thomson will have a tough job eking out extra cash for it from CERN member states. He’ll also be busy ensuring the upgraded LHC, known as the High-Luminosity LHC, is ready as planned by 2030.

I wouldn’t dare tell Thomson how to do his job, but Physics World did once ask previous CERN directors-general what skills are needed as lab boss. Crucial, they said, were people management, delegation, communication and the ability to speak multiple languages. Physical stamina was deemed a vital attribute too, with extensive international travel and late-night working required.

One former CERN director-general even cited the need to “eat two lunches the same day to satisfy important visitors”. Squeezing double dinners in will probably be the least of Thomson’s worries.

Fortunately, I bumped into Thomson at an Institute of Physics meeting in London earlier this week, where he agreed to do an interview with Physics World. So you can be sure we’ll get Thomson to put his aims and priorities as the next CERN boss on record. Stay tuned…

New imaging technique could change how we look at certain objects in space
https://physicsworld.com/a/new-imaging-technique-could-change-how-we-look-at-certain-objects-in-space/ | Wed, 27 Nov 2024 12:00:32 +0000
3D reconstruction technique for polarized radio sources could also reshape views of how radio galaxies form

The post New imaging technique could change how we look at certain objects in space appeared first on Physics World.

A new imaging technique that takes standard two-dimensional (2D) radio images and reconstructs them as three-dimensional (3D) ones could tell us more about structures such as the jet-like features streaming out of galactic black holes. According to the technique’s developers, it could even call into question physical models of how radio galaxies formed in the first place.

“We will now be able to obtain information about the 3D structures in polarized radio sources whereas currently we only see their 2D structures as they appear in the plane of the sky,” explains Lawrence Rudnick, an observational astrophysicist at the University of Minnesota, US, who led the study. “The analysis technique we have developed can be performed not only on the many new maps to be made with powerful telescopes such as the Square Kilometre Array and its precursors, but also from decades of polarized maps in the literature.”

Analysis of data from the MeerKAT radio telescope array

In their new work, Rudnick and colleagues in Australia, Mexico, the UK and the US studied polarized light data from the MeerKAT radio telescope array at the South African Radio Astronomy Observatory. They exploited an effect called Faraday rotation, which rotates the angle of polarized radiation as it travels through a magnetized ionized region. By measuring the amount of rotation for each pixel in an image, they can determine how much material that radiation passed through.
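The underlying relation is the textbook one (a sketch of the standard formulas, not the team's analysis code): the polarization angle rotates by the rotation measure (RM) times the wavelength squared, and for a uniform slab the RM scales with electron density, line-of-sight magnetic field and path length:

```python
C = 2.998e8  # speed of light, m/s

def faraday_angle(chi0_rad, rm_rad_m2, freq_hz):
    """Observed polarization angle: chi = chi0 + RM * lambda**2."""
    lam = C / freq_hz
    return chi0_rad + rm_rad_m2 * lam ** 2

def rotation_measure(n_e_cm3, b_par_uG, path_pc):
    """Uniform-slab rotation measure in rad/m^2:
    RM = 0.81 * n_e [cm^-3] * B_parallel [microgauss] * L [pc]."""
    return 0.81 * n_e_cm3 * b_par_uG * path_pc

# Illustrative cluster-like numbers: n_e = 0.01 cm^-3, B = 1 uG, L = 1000 pc
rm = rotation_measure(0.01, 1.0, 1000.0)  # 8.1 rad/m^2
angle = faraday_angle(0.0, rm, 1.4e9)     # extra rotation at 1.4 GHz
```

Comparing the measured angle at two or more frequencies yields the RM for each pixel, and in a uniform medium that RM tracks how much magnetized plasma the radiation has traversed.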

In the simplest case of a uniform medium, says Rudnick, this information tells us the relative distance between us and the emitting region for that pixel. “This allows us to reconstruct the 3D structure of the radiating plasma,” he explains.

An indication of the position of the emitting region

The new study builds on a previous effort that focused on a specific cluster of galaxies for which the researchers already had cubes of data representing its 2D appearance in the sky, plus a third axis given by the amount of Faraday rotation. In the latest work, they decided to look at this data in a new way, viewing the cubes from different angles.

“We realized that the third axis was actually giving us an indication of the position of the emitting region,” Rudnick says. “We therefore extended the technique to situations where we didn’t have cubes to start with, but could re-create them from a pair of 2D images.”

There is a problem, however, in that the polarization angle can also rotate as the radiation travels through regions of space that are anything but uniform, including our own Milky Way galaxy and other intervening media. “In that case, the amount of rotation doesn’t tell us anything about the actual 3D structure of the emitting source,” Rudnick adds. “Separating out this information from the rest of the data is perhaps the most difficult aspect of our work.”

Shapes of structures are very different in 3D

Using this technique, Rudnick and colleagues were able to determine the line-of-sight orientation of active galactic nuclei (AGN) jets as they are expelled from a massive black hole at the centre of the Fornax A galaxy. They were also able to observe how the materials in these jets interact with “cosmic winds” (essentially larger-scale versions of the magnetic solar wind streaming from our own Sun) and other space weather, and to analyse the structures of magnetic fields inside the jets from the M87 galaxy’s black hole.

The team found that the shapes of structures as inferred from 2D radio images were sometimes very different from those that appear in the 3D reconstructions. Rudnick notes that some of the mental “pictures” we have in our heads of the 3D structure of radio sources will likely turn out to be wrong after they are re-analysed using the new method. One good example in this study was a radio source that, in 2D, looks like a tangled string of filaments filling a large volume. When viewed in 3D, it turns out that these filamentary structures are in fact confined to a band on the surface of the source. “This could change the physical models of how radio galaxies are formed, basically how the jets from the black holes in their centres interact with the surrounding medium,” Rudnick tells Physics World.

The work is detailed in Monthly Notices of the Royal Astronomical Society.

Millions of smartphones monitor Earth’s ever-changing ionosphere
https://physicsworld.com/a/millions-of-smartphones-monitor-ionosphere-dynamics/ | Wed, 27 Nov 2024 08:35:49 +0000
Crowd-sourced data could improve global navigation satellite systems such as GPS

The post Millions of smartphones monitor Earth’s ever-changing ionosphere appeared first on Physics World.

A plan to use millions of smartphones to map out real-time variations in Earth’s ionosphere has been tested by researchers in the US. Developed by Brian Williams and colleagues at Google Research in California, the system could improve the accuracy of global navigation satellite systems (GNSSs) such as GPS and provide new insights into the ionosphere.

A GNSS uses a network of satellites to broadcast radio signals to ground-based receivers. Each receiver calculates its position based on the arrival times of signals from several satellites. These signals first pass through Earth’s ionosphere, which is a layer of weakly-ionized plasma about 50–1500 km above Earth’s surface. As a GNSS signal travels through the ionosphere, it interacts with free electrons and this slows down the signals slightly – an effect that depends on the frequency of the signal.

The problem is that the free electron density is not constant in either time or space. It can spike dramatically during solar storms and it can also be affected by geographical factors such as distance from the equator. The upshot is that variations in free electron density can lead to significant location errors if not accounted for properly.

To deal with this problem, navigation satellites send out two separate signals at different frequencies. These are received by dedicated monitoring stations on Earth’s surface, and the difference between the arrival times of the two frequencies is used to create real-time maps of the free electron density of the ionosphere. Such maps can then be used to correct location errors. However, these monitoring stations are expensive to install and tend to be concentrated in wealthier regions of the world, leaving large gaps in ionosphere maps.
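The dual-frequency trick rests on the first-order ionospheric group delay scaling as 1/f². A minimal sketch using the standard GPS relations (the 40.3 constant and the L1/L2 frequencies are textbook GPS values, not details from Google's pipeline):

```python
K = 40.3  # first-order ionospheric delay constant, m^3/s^2 per (electron/m^2)

def iono_delay(tec, f_hz):
    """Ionospheric group delay in metres for a given slant TEC (electrons/m^2)."""
    return K * tec / f_hz ** 2

def tec_from_dual_freq(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Slant TEC inferred from the GPS L1/L2 pseudorange difference."""
    return (p2_m - p1_m) * (f1_hz ** 2 * f2_hz ** 2) / (K * (f1_hz ** 2 - f2_hz ** 2))

# Round trip: a 10-TECU ionosphere delays the lower-frequency L2 signal more
tec_true = 1e17  # electrons/m^2 (= 10 TECU)
p1 = 2.0e7 + iono_delay(tec_true, 1575.42e6)  # L1 pseudorange, m
p2 = 2.0e7 + iono_delay(tec_true, 1227.60e6)  # L2 pseudorange, m
tec_est = tec_from_dual_freq(p1, p2)
```

The geometric range (here 20,000 km) cancels in the difference, so the two-frequency measurement isolates the ionospheric contribution, which is exactly what a phone's dual-frequency GNSS sensor can report.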

Dual-frequency sensors

In their study, Williams’ team took advantage of the fact that many modern mobile phones have sensors that detect GNSS signals at two different frequencies. “Instead of thinking of the ionosphere as interfering with GPS positioning, we can flip this on its head and think of the GPS receiver as an instrument to measure the ionosphere,” Williams explains. “By combining the sensor measurements from millions of phones, we create a detailed view of the ionosphere that wouldn’t otherwise be possible.”

This is not a simple task, however, because individual smartphones are not designed for mapping the ionosphere. Their antennas are much less efficient than those of dedicated monitoring stations and the signals that smartphones receive are often distorted by surrounding buildings – and even users’ bodies. Also, these measurements are affected by the design of the phone and its GNSS hardware.

The big benefit of using smartphones is that their ownership is ubiquitous across the globe – including in developing regions such as India, Africa, and Southeast Asia. “In these parts of the world, there are still very few dedicated scientific monitoring stations that are being used by scientists to generate ionosphere maps,” says Williams. “Phone measurements provide a view of parts of the ionosphere that isn’t otherwise possible.”

The team’s proposal involves creating a worldwide network comprising millions of smartphones that will each carry out error correction measurements using the dual-frequency signals from GNSS satellites. Although each individual measurement will be relatively poor, the large number of measurements can be used to improve the overall accuracy of the map.

Simultaneous calibration

“By combining measurements from many phones, we can simultaneously calibrate the individual sensors and produce a map of ionosphere conditions, leading to improved location accuracy, and a better understanding of this important part of the Earth’s atmosphere,” Williams explains.

In their initial tests of the system, the researchers aggregated ionosphere measurements from millions of Android devices around the world. Crucially, there was no need to identify individual devices contributing to the study – ensuring the privacy and security of users.

Williams’ team was able to map a diverse array of variations in Earth’s ionosphere. These included plasma bubbles over India and South America; the effects of a small solar storm over North America; and a depletion in free electron density over Europe. These observations doubled the coverage area of existing maps and boosted resolution compared with maps made using data from monitoring stations alone.

If such a smartphone-based network is rolled out, ionosphere-related location errors could be reduced by several metres – which would be a significant advantage to smartphone users.

“For example, devices could differentiate between a highway and a parallel rugged frontage road,” Williams predicts. “This could ensure that dispatchers send the appropriate first responders to the correct place and provide help more quickly.”

The research is described in Nature.

The post Millions of smartphones monitor Earth’s ever-changing ionosphere appeared first on Physics World.

Electromagnetic waves solve partial differential equations https://physicsworld.com/a/electromagnetic-waves-solve-partial-differential-equations/ Tue, 26 Nov 2024 16:00:56 +0000 https://physicsworld.com/?p=118434 New photonic technique could boost analogue alternatives to numerical methods

The post Electromagnetic waves solve partial differential equations appeared first on Physics World.

Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.

Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.

Alternatives to numerical computing

To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.

The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.

Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.

Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonics structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.

Interconnected metatronic elements

In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound, as well as the interactions of light and sound with resonators.

Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.

The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in other regions in it. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
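For contrast with the analogue approach, the snippet below shows the kind of conventional numerical treatment the metatronic network sidesteps: a minimal finite-difference relaxation of the Helmholtz equation on a square with Dirichlet boundary data. The grid size, wavenumber and boundary values are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

# Minimal finite-difference sketch of the conventional numerical route:
# relax the Helmholtz equation  del^2 f + k^2 f = 0  on the unit square
# with Dirichlet boundary data (illustrative choices, not the paper's).
n = 21              # grid points per side
h = 1.0 / (n - 1)   # grid spacing
k = 2.0             # wavenumber (well below the domain's first eigenvalue)

f = np.zeros((n, n))
f[0, :] = 1.0       # prescribed value on one edge; other edges held at 0

# Jacobi iteration: rearrange the 5-point stencil
#   (f_E + f_W + f_N + f_S - 4*f_C)/h^2 + k^2 * f_C = 0
# for the centre value f_C and sweep until it converges.
for _ in range(2000):
    neighbours = f[1:-1, 2:] + f[1:-1, :-2] + f[2:, 1:-1] + f[:-2, 1:-1]
    f[1:-1, 1:-1] = neighbours / (4.0 - (k * h) ** 2)

print(round(float(f[n // 2, n // 2]), 3))   # solution at the domain centre
```

Each grid point here plays the role of a sampling point in the waveguide network; the analogue processor effectively performs the equivalent of this relaxation at the speed of wave propagation.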

Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.

In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”

Institute of Physics says physics ‘deep tech’ missing out on £4.5bn of extra investment https://physicsworld.com/a/institute-of-physics-says-physics-deep-tech-missing-out-on-4-5bn-of-extra-investment/ Tue, 26 Nov 2024 13:40:22 +0000 https://physicsworld.com/?p=118425 Report by the Institute of Physics finds that venture-capital investors often struggle to invest in physics

The post Institute of Physics says physics ‘deep tech’ missing out on £4.5bn of extra investment appeared first on Physics World.

UK physics “deep tech” could be missing out on almost £1bn of investment each year. That is according to a new report by the Institute of Physics (IOP), which publishes Physics World. It finds that venture-capital investors often struggle to invest in high-innovation physics industries given the lack of the “one-size-fits-all” commercialisation pathway seen in other areas such as biotech.

According to the report, physics-based businesses add about £230bn to the UK economy each year and employ more than 2.7 million full-time employees. The UK also has one of the largest venture-capital markets in Europe and the highest rates of spin-out activity, especially in biotech.

Despite this, however, venture capital investment in “deep tech” physics – start-ups whose business model is based on high-tech innovation or significant scientific advances – remains low, attracting £7.4bn or 30% of UK science venture-capital investment.

To find out the reasons for this discrepancy, the IOP interviewed science-led businesses as well as 32 leading venture-capital investors. Based on these discussions, it found that many investors are unsure about certain aspects of physics-based start-ups, which often do not follow the familiar development lifecycle seen in other areas such as biotech.

Physics businesses are not, for example, always able to transition from being tech focussed to being product-led in the early stages of development, which prevents venture capitalists from committing large amounts of money. Another issue is that venture capitalists are less familiar with the technologies, timescales and “returns profile” of physics deep tech.

The IOP report estimates that if the full investment potential of physics deep tech is unlocked then it could result in an extra £4.5bn of additional funding over the next five years. In a foreword to the report, Hermann Hauser, the tech entrepreneur and founder of Acorn Computers, highlights “uncovered issues within the system that are holding back UK venture capital investment” into physics-based tech. “Physics deep-tech businesses generate huge value and have unique characteristics – so our national approach to finance for these businesses must be articulated in ways that recognise their needs,” writes Hauser.

Physics deep tech is central to the UK’s future prosperity

Tom Grinyer

At the same time, investors see a lot of opportunity in subjects such as quantum and semiconductor physics, as well as in artificial intelligence and nuclear fusion. Jo Slota-Newson, a managing partner at Almanac Ventures who co-wrote the report, says there is “huge potential” for physics deep-tech businesses but “venture capital funds are being held back from raising and deploying capital to support this crucial sector”.

The IOP is now calling for a coordinated effort from government, investors as well as the business and science communities to develop “investment pathways” to address the issues raised in the report.  For example, the UK government should ensure grant and debt-financing options are available to support physics tech at “all stages of development”.

Slota-Newson, who has a background in science including a PhD in chemistry from the University of Cambridge, says that such moves should be “at the heart” of the UK’s government’s plans for growth. “Investors, innovators and government need to work together to deliver an environment where at every stage in their development there are opportunities for our deep tech entrepreneurs to access funding and support,” adds Slota-Newson. “If we achieve that we can build the science-driven, innovative economy, which will provide a sustainable future of growth, security and prosperity.”

The report also says that the IOP should play a role by continuing to highlight successful physics deep-tech businesses and to help them attract investment from both the UK and international venture-capital firms. Indeed, Tom Grinyer, group chief executive officer of the IOP, says that getting the model right could “supercharge the UK economy as a global leader in the technologies that will define the next industrial revolution”.

“Physics deep tech is central to the UK’s future prosperity — the growth industries of the future lean very heavily on physics and will help both generate economic growth and help move us to a lower carbon, more sustainable economy,” says Grinyer. “By leveraging government support, sharing information better and designing our financial support of this key sector in a more intelligent way we can unlock billions in extra investment.”

That view is backed by Hauser. “Increased investment, economic growth, and solutions to some of our biggest societal challenges [will move] us towards a better world for future generations,” he writes. “The prize is too big to miss”.

Triboelectric device reduces noise pollution https://physicsworld.com/a/triboelectric-device-reduces-noise-pollution/ Tue, 26 Nov 2024 12:00:47 +0000 https://physicsworld.com/?p=118427 A fibrous composite foam employs the triboelectric effect and in situ electrical energy dissipation to absorb low-frequency sound waves

The post Triboelectric device reduces noise pollution appeared first on Physics World.

Sound-absorbing mechanism of triboelectric fibrous composite foam

Noise pollution is increasingly common in today’s society, affecting both humans and wildlife. While loud noise can be a passing inconvenience, regular exposure can have adverse effects on human health that go well beyond mild irritation.

As noise pollution worsens, researchers are working to mitigate its impact through new sound absorption materials. A team headed up by the Agency for Science, Technology and Research (A*STAR) in Singapore has now developed a new approach to tackling the problem: absorbing sound waves using the triboelectric effect.

The World Health Organization defines noise pollution as noise levels above 65 dB, and one in five Europeans is regularly exposed to levels considered harmful to their health. “The adverse impacts of airborne noise on human health are a growing concern, including disturbing sleep, elevating stress hormone levels, inciting inflammation and even increasing the risk of cardiovascular diseases,” says Kui Yao, senior author on the study.

Passive provides the best route

Mitigating noise requires conversion of the mechanical energy in acoustic waves into another form. For this, passive sound absorbers are a better option than active versions because they require less maintenance and consume no power (so don’t require a lot of extra components to work).

Previous efforts from Yao’s research group have shown that the piezoelectric effect – the process of creating a current when a material undergoes mechanical stress – can convert mechanical energy into electricity and could be used for passive sound absorption. However, the researchers postulated that the triboelectric effect – the process of electrical charge transfer when two surfaces contact each other – could be more effective for absorbing low-frequency noise.

The triboelectric effect is more commonly applied for harvesting mechanical energy, including acoustic energy. Unlike in energy harvesting, however, its use in noise mitigation is not limited by the electronics around the material, which can cause impedance mismatching and electrical leakage. For sound absorbers, therefore, there’s potential to create a device with close to 100% efficient triboelectric energy conversion.

Exploiting the triboelectric effect

Yao and colleagues developed a fibrous polypropylene/polyethylene terephthalate (PP/PET) composite foam that uses the triboelectric effect and in situ electrical energy dissipation to absorb low-frequency sound waves. In this foam, sound is converted into electricity through embedded electrically conductive elements, and this electricity is then dissipated into heat and removed from the material.

The energy dissipation mechanism requires triboelectric pairing materials with a large difference in charge affinity (the tendency to gain or lose charge from/to the other material). The larger the difference between the two fibre materials in the foam, the better the acoustic absorption performance due to the larger triboelectric effect.

To understand the effectiveness of different foam compositions for absorbing and converting sound waves, the researchers designed an acoustic impedance model to analyse the underlying sound absorption mechanisms. “Our theoretical analysis and experimental results show superior sound absorption performance of triboelectric energy dissipator-enabled composite foams over common acoustic absorbing products,” explains Yao.

The researchers first tested the fibrous PP/PET composite foam theoretically and experimentally and found that it had a high noise reduction coefficient (NRC) of 0.66 (over a broad low-frequency range). This translates to a 24.5% improvement in sound absorption performance compared with sound absorption foams that don’t utilize the triboelectric effect.

On the back of this result, the researchers validated their process further by testing other material combinations. This included: a PP/polyvinylidene fluoride (PVDF) foam with an NRC of 0.67 and 22.6% improvement in sound absorption performance; a glass wool/PVDF foam with an NRC of 0.71 and 50.6% improvement in sound absorption performance; and a polyurethane/PVDF foam with an NRC of 0.79 and 43.6% improvement in sound absorption performance.

All the improvements are based on a comparison against their non-triboelectric counterparts – where the sound absorption performance varies from composition to composition, hence the non-linear relationship between percentage values and NRC values. The foams also showed a sound absorption performance of 0.8 NRC at 800 Hz and around 1.00 NRC with sound waves above 1.4 kHz, compared with commercially available counterpart absorber materials.
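For readers unfamiliar with the metric, the NRC values quoted above follow a simple recipe, commonly defined in ASTM C423: average the measured sound absorption coefficients at 250, 500, 1000 and 2000 Hz, then round to the nearest multiple of 0.05. A sketch with made-up coefficients, not data from the paper:

```python
# Illustration of how a noise reduction coefficient (NRC) is computed
# (per ASTM C423): average the absorption coefficients at 250, 500,
# 1000 and 2000 Hz, then round to the nearest 0.05.
# The coefficients below are invented, not the paper's measurements.
def nrc(a250: float, a500: float, a1000: float, a2000: float) -> float:
    mean = (a250 + a500 + a1000 + a2000) / 4.0
    return round(mean / 0.05) * 0.05

print(f"NRC = {nrc(0.35, 0.55, 0.80, 0.95):.2f}")  # prints NRC = 0.65
```

An NRC of 0.66, as reported for the PP/PET foam, therefore means the material absorbs roughly two-thirds of incident sound energy averaged over those mid-range test frequencies.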

When asked about the future of the sound absorbers, Yao tells Physics World: “We are continuing to improve the performance properties and seeking collaborations for adoption in practical applications”.

The research is published in Nature Communications.

Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change https://physicsworld.com/a/cloudy-with-a-chance-of-warming-how-physicists-are-studying-the-dynamical-impact-of-clouds-on-climate-change/ Tue, 26 Nov 2024 11:00:31 +0000 https://physicsworld.com/?p=118123 Michael Allen on how essential it is to understand how clouds respond to climate

The post Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change appeared first on Physics World.

For all of us concerned about climate change, 2023 was a grim year. According to the World Meteorological Organisation (WMO), it was the warmest year documented so far, with records broken – and in some cases smashed – for ocean heat, sea-level rise, Antarctic sea-ice loss and glacier retreat.

Capping off the warmest 10-year period on record, global average near-surface temperature hit 1.45 °C above pre-industrial levels. “Never have we been so close – albeit on a temporary basis at the moment – to the 1.5 °C lower limit of the Paris Agreement on climate change,” said WMO secretary-general Celeste Saulo in a statement earlier this year.

The heatwaves, floods, droughts and wildfires of 2023 are clear signs of the increasing dangers of the climate crisis. As we look to the future and wonder how much the world will warm, accurate climate models are vital.

For the physicists who build and run these models, one major challenge is figuring out how clouds are changing as the world warms, and how those changes will impact the climate system. According to the Intergovernmental Panel on Climate Change (IPCC), these feedbacks create the biggest uncertainties in predicting future climate change. 

Cloud cover, high and low

Clouds play a key role in the climate system, as they have a profound impact on the Earth’s radiation budget. That is the balance between the amount of energy coming in from solar radiation, and the amount of energy going back out to space, which is both the reflected (shortwave) and thermal (longwave) energy radiated from the Earth.

According to NASA, about 29% of solar energy that hits Earth’s atmosphere is reflected back into space, primarily by clouds (figure 1). And clouds also have a greenhouse effect, warming the planet by absorbing and trapping the outgoing thermal radiation.
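That 29% reflectance feeds directly into the simplest zero-dimensional energy-balance estimate, (1 − albedo)S/4 = σT⁴. The back-of-envelope sketch below (a standard textbook calculation, not a climate model) shows how directly the planet's emission temperature depends on how much sunlight clouds send back to space:

```python
# Zero-dimensional energy balance (textbook estimate, not a climate model):
# with ~29% of sunlight reflected, the effective emission temperature
# follows from  (1 - albedo) * S / 4 = sigma * T**4.
S = 1361.0        # solar constant, W/m^2
ALBEDO = 0.29     # reflected fraction, primarily due to clouds
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = (1.0 - ALBEDO) * S / 4.0        # flux averaged over the sphere
t_eff = (absorbed / SIGMA) ** 0.25
print(f"absorbed flux: {absorbed:.0f} W/m^2")    # ~242 W/m^2
print(f"effective temperature: {t_eff:.0f} K")   # ~255 K
```

The gap between this ~255 K emission temperature and the much warmer surface is the greenhouse effect, part of which is supplied by clouds trapping outgoing thermal radiation.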

1 Earth’s energy budget

Diagram of energy flowing into and out of Earth's atmosphere

How energy flows into and away from the Earth. Based on data from multiple sources including NASA’s CERES satellite instrument, which measures reflected solar and emitted infrared radiation fluxes. All values are fluxes in watts per square metre and are average values based on 10 years of data. First published in 2014.

“Even a subtle change in global cloud properties could be enough to have a noticeable effect on the global energy budget and therefore the amount of warming,” explains climate scientist Paulo Ceppi of Imperial College London, who is an expert on the impact of clouds on global climate.

A key factor in this dynamic is “cloud fraction” – the percentage of the Earth’s surface covered by cloud at a given time. It is determined via satellite imagery: cloud fraction is the portion of each pixel in a 1-km-resolution cloud mask that is covered by clouds (figure 2).
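The pixel bookkeeping can be illustrated with a toy cloud mask (the values below are invented for illustration):

```python
import numpy as np

# Toy illustration of the cloud-fraction bookkeeping described above:
# a cloud mask is a grid of pixels flagged clear (0) or cloudy (1), and
# the cloud fraction of a region is the cloudy share of its pixels.
mask = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])

cloud_fraction = mask.mean()    # cloudy pixels / total pixels
print(cloud_fraction)           # 7 of 16 pixels are cloudy -> 0.4375
```

Real products such as MODIS aggregate exactly this kind of per-pixel mask over space and time to produce the monthly maps shown in figure 2.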

Apart from the amount of cover, what also matters is the altitude of clouds and their optical thickness. Higher, cooler clouds absorb more thermal energy originating from the Earth’s surface, and therefore have a greater greenhouse warming effect than low clouds. They also tend to be thinner, so they let more sunlight through and have a net warming effect overall. Low clouds, on the other hand, have a weak greenhouse effect but tend to be thicker, reflecting more solar radiation. They generally have a net cooling effect.

2 Cloud fraction

These maps show what fraction of an area was cloudy on average each month, according to measurements collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. MODIS collects information in gridded boxes, or pixels. Cloud fraction is the portion of each pixel that is covered by clouds. Colours range from blue (no clouds) to white (totally cloudy).

The band of persistent clouds around the equator is the Intertropical Convergence Zone – where the easterly trade winds in the Northern and Southern Hemispheres meet, pushing warm, moist air high into the atmosphere. The air expands and cools, and the water vapour condenses into clouds and rain. The cloud band shifts slightly north and south of the equator with the seasons. In tropical countries, this shifting of the zone is what causes rainy and dry seasons.

Video and data courtesy: NASA Earth Observations

As the climate warms, cloud properties are changing, altering the radiation budget and influencing the amount of warming. Indeed, there are two key changes: rising cloud tops and a reduction in low cloud amount.

The best understood effect, Ceppi explains, is that as global temperatures increase, clouds rise higher into the troposphere, the lowermost atmospheric layer. This is because the troposphere expands as it warms, extending to greater altitudes. Over the last 40 years the top of the troposphere, known as the tropopause, has risen by about 50 metres per decade (Sci. Adv. 10.1126/sciadv.abi8065).

“You are left with clouds that rise higher up on average, so have a greater greenhouse warming effect,” Ceppi says. He adds that modelling data and satellite observations support the idea that cloud tops are rising.

Conversely, coverage of low clouds, which reflect sunlight and cool the Earth’s surface, is decreasing with warming. This reduction is mainly in marine low clouds over tropical and subtropical regions. “We are talking a few per cent, so not something that you would necessarily notice with your bare eyes, but it’s enough to have an effect of amplifying global warming,” he adds.

These changes in low clouds are partly responsible for some of the extreme ocean heatwaves seen in recent years (figure 3). While the mechanisms behind these events are complex, one known driver is this reduction in low cloud cover, which allows more solar radiation to hit the ocean (Science 325 460).

“It’s cloud feedback on a more local scale,” Ceppi says. “So, the ocean surface warms locally and that prompts low cloud dissipation, which leads to more solar radiation being absorbed at the surface, which prompts further warming and therefore amplifies and sustains those events.”

3 Ocean heat

Heat map of the Earth

Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period. The global ocean experienced an average daily marine heatwave coverage of 32%, well above the previous record of 23% in 2016. At the end of 2023, most of the global ocean between 20° S and 20° N had been in heatwave conditions since early November.

Despite these insights, several questions remain unanswered. For example, Ceppi explains that while we know that low cloud changes will amplify warming, the strength of these effects needs further investigation, to reduce the uncertainty range.

Also, as high clouds move higher, there may be other important changes, such as shifts in optical thickness, which is a measure of how much light is scattered or absorbed by cloud droplets, instead of passing through the atmosphere. “We are a little less certain about what else happens to [high clouds],” says Ceppi.

Diurnal changes

It’s not just the spatial distribution of clouds that impacts climate. Recent research has found an increasing asymmetry in cloud-cover changes between day and night. Simply put, daytime clouds tend to cool Earth’s surface by reflecting solar radiation, while at night clouds trap thermal radiation and have a warming effect. This shift in diurnal distribution could create a feedback loop that amplifies global warming.

The new study was led by theoretical meteorologist Johannes Quaas at Leipzig University, together with Hao Luo and Yong Han from Sun Yat-sen University in China, who found that as the climate warms, cloud cover – especially in the lower atmosphere – decreases more during the day than at night (Sci. Adv. 10.1126/sciadv.ado5179).

By analysing satellite observations and data from the sixth phase of the Coupled Model Intercomparison Project (CMIP6) – which incorporates historical data collected between 1970 and 2014 as well as projections up to the year 2100 – the researchers concluded that this diurnal asymmetry is largely due to rising concentrations of greenhouse gases that make the lower troposphere more stable, which in turn increases the overall heating.

Fewer clouds form during the day, thereby reducing the amount of shortwave radiation that is reflected away. Night-time clouds are more stable, which in turn increases the longwave greenhouse effect. “Our study shows that this asymmetry causes a positive feedback loop that amplifies global warming,” says Quaas. This growing asymmetry is mainly driven by a daytime increase in turbulence in the lower troposphere as the climate warms, meaning that clouds are less likely to form and remain stable during the day.

Mixed-phase clouds

Climate models are affected by more than just the distribution of clouds in space. What also matters is the distribution of liquid water and ice within clouds. In fact, researchers have found that the way in which models simulate this effect influences their predictions of warming in response to greenhouse gas emissions.

So-called “mixed-phase” clouds contain water vapour, ice particles and supercooled liquid droplets, existing as a three-phase colloidal system. Ubiquitous in the troposphere, they are found at all latitudes from the polar regions to the tropics and play an important role in the climate system.

As the atmosphere warms, mixed-phase clouds tend to shift from ice to liquid water. This transition makes these clouds more reflective, enhancing their cooling effect on the Earth’s surface – a negative feedback that dampens global warming.

In 2016 Trude Storelvmo, an atmospheric scientist at the University of Oslo in Norway, and her colleagues made an important discovery: many climate models overestimate this negative feedback (Geophys. Res. Lett. 10.1029/2023GL105053). Indeed, the models often simulate clouds with too much ice and not enough liquid water. This error exaggerates the cooling effect from the phase transition. Essentially, the clouds in these simulations have too much ice to lose, causing the models to overestimate the increase in their reflectiveness as they warm.

One problem is that these models oversimplify cloud structure, failing to capture the true heterogeneity of mixed-phase clouds. Satellite, balloon and aircraft observations reveal that these clouds are not uniformly mixed, either vertically or horizontally. Instead, they contain pockets of ice and liquid water, leading to complex interactions that are inadequately represented in the simulations. As a result, they overestimate ice formation and underestimate liquid cloud development.

Storelvmo’s work also found that initially, increased cloud reflectivity has a strong effect that helps mitigate global warming. But as the atmosphere continues to warm, the increase in reflectiveness slows. This shift is intuitive: as the clouds become more liquid, they have less ice to lose. At some point they become predominantly liquid, eliminating the phase transition. The clouds cannot become any more liquid – and thus any more reflective – and warming accelerates.

Liquid cloud tops

Earlier this year, Storelvmo and colleagues carried out a new study, using satellite data to study the vertical composition of mixed-phase clouds. The team discovered that globally, these clouds are more liquid at the top (Commun. Earth Environ. 5 390).

Storelvmo explains that this top cloud layer is important as “it is the first part of the cloud that radiation interacts with”. When the researchers adjusted climate models to correctly capture this vertical composition, it had a significant impact, triggering an additional degree of warming in a “high-carbon emissions” scenario by the end of this century, compared with current climate projections.

“It is not inconceivable that we will reach temperatures where most of [the negative feedback from clouds] is lost, with current CO2 emissions,” says Storelvmo. The point at which this happens is unclear, but is something that scientists are actively working on.

The study also revealed that while changes to mixed-phase clouds in the northern mid-to-high latitudes mainly influence the climate in the northern hemisphere, changes to clouds at the same southern latitudes have global implications.

“When we modify clouds in the southern extratropics that’s communicated all the way to the Arctic – it’s actually influencing warming in the Arctic,” says Storelvmo. The reasons for this are not fully understood, but Storelvmo says other studies have seen the effect too.

“It’s an open and active area of research, but it seems that the atmospheric circulation helps pass on perturbations from the Southern Ocean much more efficiently than northern perturbations,” she explains.

The aerosol problem

As well as generating the greenhouse gases that drive the climate crisis, fossil fuel burning also produces aerosols. The resulting aerosol pollution is a huge public health issue. The recent “State of Global Air Report 2024” from the Health Effects Institute found that globally eight million people died because of air pollution in 2021. Dirty air is also now the second-leading cause of death in children under five, after malnutrition.

To tackle these health implications, many countries and organizations have introduced air-quality clean-up policies. But cleaning up air pollution has an unfortunate side-effect: it exacerbates the climate crisis. Indeed, a recent study has even warned that aggressive aerosol mitigation policies will hinder our chances of keeping global warming below 2 °C (Earth’s Future 10.1029/2023EF004233).

Smog in Lahore

Jim Haywood, an atmospheric scientist at the University of Exeter, says that aerosols have two major cooling impacts on climate. The first is through the direct scattering of sunlight back out to space. The second is via the changes they induce in clouds.

When you add small pollution particles to clouds, explains Haywood, it creates “clouds that are made up of a larger number of small cloud droplets and those clouds are more reflective”. The shrinking of cloud droplet size can also reduce precipitation, keeping more liquid water in the clouds. The clouds therefore last longer, cover a greater area and become more reflective.

But if atmospheric aerosol concentrations are reduced, so too are these reflective, planet-cooling effects. “This masking effect by the aerosols is taken out and we unveil more and more of the full greenhouse warming,” says Quaas.

A good example of this is recent policy aimed at cleaning up shipping fuels by lowering sulphur concentrations. At the start of 2020 the International Maritime Organisation introduced regulations that slashed the limit on sulphur content in fuels from 3.5% to 0.5%.

Haywood explains that this has reduced the additional reflectivity that this pollution created in clouds and caused a sharp increase in global warming rates. “We’ve done some simulations with climate models, and they seem to be suggestive of at least three to four years acceleration of global warming,” he adds.

Overall, models suggest that if we remove all the world’s polluting aerosols, we can expect around 0.4 °C of additional warming, says Quaas. He acknowledges that we must improve air quality “because we cannot just accept people dying and ecosystems deteriorating”. But in doing so, we must also be prepared for this additional warming. And more work is needed, “because the current uncertainty is too large”, he continues. Uncertainty in the figures is around 50%, according to Quaas, which means that slashing aerosol pollution could cause anywhere from 0.2 to 0.6 °C of additional warming.
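
That quoted range follows directly from the central estimate and its relative uncertainty. A back-of-envelope check, using only the figures quoted above:

```python
# Back-of-envelope check of the aerosol "unmasking" range quoted in the text:
# a 0.4 degC central estimate with roughly 50% relative uncertainty.
central_warming = 0.4        # degC of extra warming if all polluting aerosols are removed
relative_uncertainty = 0.5   # ~50%, per Quaas

low = central_warming * (1 - relative_uncertainty)
high = central_warming * (1 + relative_uncertainty)
print(f"additional warming: {low:.1f} to {high:.1f} degC")  # 0.2 to 0.6 degC
```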

Haywood says that while current models do a relatively good job of representing how aerosols reduce cloud droplet size and increase cloud brightness, they do a poor job of showing how aerosols affect cloud fraction.

Cloud manipulation

The fact that aerosols cool the planet by brightening clouds raises an obvious question: could we use aerosols to deliberately manipulate cloud properties and mitigate climate change?

“There are more recent proposals to combat the impacts, or the worst of the impacts of global warming, through either stratospheric aerosol injection or marine cloud brightening, but they are really in their infancy and need to be understood an awful lot better before any kind of deployment can even be considered,” says Haywood. “You need to know not just how the aerosols might interact with clouds, but also how the cloud then interacts with the climate system and the [atmospheric] teleconnections that changing cloud properties can induce.”

Haywood recently co-authored a position paper, together with a group of atmospheric scientists in the US and Europe, arguing that a programme of physical science research is needed to evaluate the viability and risks of marine cloud brightening (Sci. Adv. 10 eadi8594).

A proposed form of solar radiation management, known as marine cloud brightening, would involve injecting aerosol particles into low-level, liquid marine clouds – mainly those covering large areas of subtropical oceans – to increase their reflectiveness (figure 4).

Most marine cloud-brightening proposals suggest using saltwater spray as the aerosol. In theory, when sprayed into the air the saltwater would evaporate to produce fine haze particles, which would then be transported by air currents into the clouds. Once in the clouds, these particles would increase the number of cloud droplets, and so increase cloud brightness.

4 Marine cloud brightening


In this proposal, ship-based generators would ingest seawater and produce fine aerosol haze droplets with an equivalent dry diameter of approximately 50 nm. In optimal conditions, many of these haze droplets would be lofted into the cloud by updrafts, where they would modify cloud microphysics processes, such as increasing droplet number concentrations, suppressing rain formation, and extending the coverage and lifetime of the clouds. At the cloud scale, the degree of cloud brightening and surface cooling would depend on how effectively the droplet number concentrations can be increased, droplet sizes reduced, and cloud amount and lifetime increased.
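
The droplet-number effect described above can be put on a semi-quantitative footing with the classic Twomey susceptibility relation, dA/dlnN ≈ A(1 − A)/3, for a cloud of fixed liquid water content. This is a textbook first-order estimate, not the calculation used in the position paper, and the numbers below are hypothetical:

```python
# First-order Twomey estimate: for fixed liquid water content, the cloud
# albedo A responds to the droplet number concentration N as dA/dlnN = A(1 - A)/3.
import math

def albedo_after_seeding(a0, n0, n1):
    """Linearized Twomey estimate of albedo after changing droplet number n0 -> n1."""
    return a0 + (a0 * (1 - a0) / 3.0) * math.log(n1 / n0)

# Hypothetical marine stratocumulus deck with albedo 0.5; seeding doubles N.
print(f"{albedo_after_seeding(0.5, 100.0, 200.0):.3f}")  # ~0.558
```

Even this idealized relation shows why brightening is subtle: the susceptibility A(1 − A)/3 peaks at intermediate albedos, so very dark or very bright clouds gain little from extra droplets.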

Graham Feingold, research scientist at NOAA’s Chemical Sciences Laboratory in Boulder, Colorado, says that there are still unanswered questions about everything from particle generation to the particles’ interactions with clouds, and the overall impact on cloud brightness and atmospheric systems.

Feingold, an author on the position paper, says that a key challenge lies in predicting how additional particles will affect cloud properties. For instance, while more haze droplets might theoretically brighten clouds, they could also lead to unintended effects like increased evaporation or rain, which could even reduce cloud coverage.

Another difficult challenge is the inconstancy of cloud response to aerosols. “Ship traffic is really regular,” explains Feingold, “but if you look at satellite imagery on a daily basis in a certain area, sometimes you see really clear, beautiful ship tracks and other times you don’t – and the ship traffic hasn’t changed but the meteorology has.” This variability depends on cloud susceptibility to aerosols, which is influenced by meteorological conditions.

And even if cloud systems that respond well to marine cloud brightening are identified, it would not be sensible to repeatedly target them. “Seeding the same area persistently could have some really serious knock-on effects on regional temperature and rainfall,” says Feingold.

Essentially, aerosol injections into the same area day after day would create localized radiative cooling, which would impact regional climate patterns. This highlights the ethical concerns with cloud brightening, as such effects could benefit some regions while negatively impacting others.

Addressing many of these questions requires significant advances in current climate models, so that the entire process – from the effects of aerosols on cloud microphysics through to the larger impact on clouds and then global climate circulations – can be accurately simulated. Bridging these knowledge gaps will require controlled field experiments, such as aerosol releases from point sources in areas of interest, while taking observational data using tools like drones, airplanes and satellites. Such experiments would help scientists get a “handle on this connection between emitted particles and brightening”, says Feingold.

But physicists can only do so much. “We are not trying to push marine cloud brightening, we are trying to understand it,” says Feingold. He argues that a parallel effort to discuss the governance of marine cloud brightening is also needed.

In recent years much progress has been made in determining the role that clouds play in regulating our planet’s climate and their importance in climate modelling. “While major advances in the understanding of cloud processes have increased the level of confidence and decreased the uncertainty range for the cloud feedback by about 50% compared to AR5 [IPCC report], clouds remain the largest contribution to overall uncertainty in climate feedbacks (high confidence),” states the IPCC’s latest Assessment Report (AR6), published in 2021. Physicists and atmospheric scientists will continue to study how cloud systems respond to our ever-changing climate and planet, but ultimately it is wider society that needs to decide the way forward.

The post Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change appeared first on Physics World.

]]>
Feature Michael Allen on how essential it is to understand how clouds respond to climate https://physicsworld.com/wp-content/uploads/2024/11/2024-11-Allen-frontis-ISS034E016601.jpg newsletter
Cascaded crystals move towards ultralow-dose X-ray imaging https://physicsworld.com/a/cascaded-crystals-move-towards-ultralow-dose-x-ray-imaging/ Mon, 25 Nov 2024 13:30:14 +0000 https://physicsworld.com/?p=118379 Interconnected single-crystal devices significantly reduce X-ray detection thresholds while increasing spatial resolution

The post Cascaded crystals move towards ultralow-dose X-ray imaging appeared first on Physics World.

]]>
Single-crystal and cascade-connected devices under X-ray irradiation

X-ray imaging plays an indispensable role in diagnosing and staging disease. Nevertheless, exposure to high doses of X-rays has potential for harm, and much effort is focused towards reducing radiation exposure while maintaining diagnostic function. With this aim, researchers at the King Abdullah University of Science and Technology (KAUST) have shown how interconnecting single-crystal devices can create an X-ray detector with an ultralow detection threshold.

The team created devices using lab-grown single crystals of methylammonium lead bromide (MAPbBr3), a perovskite material that exhibits considerable stability, minimal ion migration and a high X-ray absorption cross-section – making it ideal for X-ray detection. To improve performance further, they used cascade engineering to connect two or more crystals together in series, reporting their findings in ACS Central Science.

X-rays incident upon a semiconductor crystal detector generate a photocurrent via the creation of electron–hole pairs. When exposed to the same X-ray dose, cascade-connected crystals should exhibit the same photocurrent as a single-crystal device (as they generate equal net concentrations of electron–hole pairs). The cascade configuration, however, has a higher resistivity and should thus have a much lower dark current, improving the signal-to-noise ratio and enhancing the detection performance of the cascade device.
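
A minimal sketch of this argument treats each crystal as a resistor in series: at fixed bias the dark current falls as 1/n while the photocurrent stays put, so the signal-to-noise ratio grows. The resistance and photocurrent values below are hypothetical (the resistance is chosen so the single-crystal dark current roughly matches the measured 13.4 nA), and the real devices deviate from the ideal 1/n scaling:

```python
# Idealized cascade model: n identical crystals in series under a fixed bias.
# Dark current falls as V/(n*R); the X-ray photocurrent is roughly independent
# of n, so the signal-to-noise ratio improves with the number of crystals.
def dark_current(v_bias, r_single, n_crystals):
    """Dark current (A) of n identical crystals in series under bias v_bias."""
    return v_bias / (n_crystals * r_single)

V = 2.0          # bias voltage (V), as in the experiment
R = 150e6        # hypothetical single-crystal resistance (ohm)
I_photo = 50e-9  # hypothetical photocurrent (A), taken as constant with n

for n in (1, 2, 3, 4):
    i_dark = dark_current(V, R, n)
    print(f"n={n}: dark current {i_dark * 1e9:.2f} nA, SNR ~ {I_photo / i_dark:.1f}")
```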

To test this premise, senior author Omar Mohammed and colleagues grew single crystals of MAPbBr3. They first selected four identical crystals to evaluate (SC1, SC2, SC3 and SC4), each 3 × 3 mm in area and approximately 2 mm thick. Measuring various optical and electrical properties revealed high consistency across the four samples.

“The synthesis process allows for reproducible production of MAPbBr3 single crystals, underscoring their strong potential for commercial applications,” says Mohammed.

Optimizing detector performance

Mohammed and colleagues fabricated X-ray detectors containing a single MAPbBr3 perovskite crystal (SC1) and detectors with two, three and four crystals connected in series (SC1−2, SC1−3 and SC1−4). To compare the dark currents of the devices, they measured each one under a constant 2 V bias voltage. The cascade-connected SC1–2 exhibited a dark current of 7.04 nA, roughly half that generated by SC1 (13.4 nA). SC1–3 and SC1–4 reduced the dark current further, to 4 and 3 nA, respectively.

The researchers also measured the dark current for the four devices as the bias voltage changed from 0 to -10 V. They found that SC1 reached the highest dark current of 547 nA, while SC1–2, SC1–3 and SC1–4 showed progressively decreasing dark currents of 134, 90 and 50 nA, respectively. “These findings highlight the effectiveness of cascade engineering in reducing dark current levels,” Mohammed notes.

Next, the team assessed the current stability of the devices under continuous X-ray irradiation for 450 s. SC1–2 exhibited a stable current response, with a skewness value of just 0.09, while SC1, SC1–3 and SC1–4 had larger skewness values of 0.75, 0.45 and 0.76, respectively.

The researchers point out that while connecting more single crystals in series reduced the dark current, increasing the number of connections also lowered the stability of the device. The two-crystal SC1–2 represents the optimal balance.

Low-dose imaging

One key component required for low-dose X-ray imaging is a low detection threshold. The conventional single-crystal SC1 showed a detection limit of 590 nGy/s under a 2 V bias. SC1–2 decreased this limit to 100 nGy/s – the lowest of all four devices and surpassing the existing record achieved by MAPbBr3 perovskite devices under near-identical conditions.

Spatial resolution is another important consideration. To assess this, the researchers estimated the modulation transfer function (the level of original contrast maintained by the detector) for each of the four devices. They found that SC1–2 exhibited the best spatial resolution of 8.5 line pairs/mm, compared with 5.6, 5.4 and 4 line pairs/mm for SC1, SC1–3 and SC1–4, respectively.
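
As a rough guide to what these figures mean, one line pair comprises one bright and one dark line, so a resolution of f line pairs/mm corresponds to a smallest resolvable feature of about 1/(2f) mm. This is a standard conversion, not one stated in the paper:

```python
# Convert a line-pair frequency (lp/mm) into an approximate smallest
# resolvable feature size in micrometres: one pair = one bright + one dark line.
def feature_size_um(lp_per_mm):
    return 1000.0 / (2.0 * lp_per_mm)

# Reported resolutions of the four devices:
for name, lp in [("SC1-2", 8.5), ("SC1", 5.6), ("SC1-3", 5.4), ("SC1-4", 4.0)]:
    print(f"{name}: {lp} lp/mm -> ~{feature_size_um(lp):.0f} um features")
```

On this estimate, the two-crystal device resolves features of roughly 60 µm, versus about 90 µm for the single crystal.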

X-ray images of a key and a raspberry with a needle

Finally, the researchers performed low-dose X-ray imaging experiments using the four devices, first imaging a key at a dose rate of 3.1 μGy/s. SC1 exhibited an unclear image due to the unstable current affecting its resolution. Devices SC1–2 to SC1–4 produced clearer images of the key, with SC1–2 showing the best image contrast.

They also imaged a USB port at a dose rate of 2.3 μGy/s, a metal needle piercing a raspberry at 1.9 μGy/s and an earring at 750 nGy/s. In all cases, SC1–2 exhibited the highest quality image.

The researchers conclude that the cascade-engineered configuration represents a significant shift in low-dose X-ray detection, with potential to advance applications that require minimal radiation exposure combined with excellent image quality. They also note that the approach works with different materials, demonstrating X-ray detection using cascaded cadmium telluride (CdTe) single crystals.

Mohammed says that the team is now investigating the application of the cascade structure in other perovskite single crystals, such as FAPbI3 and MAPbI3, with the goal of reducing their detection limits. “Moreover, efforts are underway to enhance the packaging of MAPbBr3 cascade single crystals to facilitate their use in dosimeter detection for real-world applications,” he tells Physics World.

The post Cascaded crystals move towards ultralow-dose X-ray imaging appeared first on Physics World.

]]>
Research update Interconnected single-crystal devices significantly reduce X-ray detection thresholds while increasing spatial resolution https://physicsworld.com/wp-content/uploads/2024/11/25-11-24-low-dose-X-ray-images-fig4.jpg newsletter1
Why academia should be funded by governments, not students https://physicsworld.com/a/why-academia-should-be-funded-by-governments-not-students/ Mon, 25 Nov 2024 11:00:56 +0000 https://physicsworld.com/?p=117958 Jonte Hance says that the increase in tuition fees will only delay the inevitable fall of the UK academic system

The post Why academia should be funded by governments, not students appeared first on Physics World.

]]>
In an e-mail to staff in September 2024, Christopher Day, the vice-chancellor of Newcastle University in the UK, announced a £35m shortfall in its finances for 2024. Unfortunately, Newcastle is not alone in facing financial difficulties. The problem is largely due to UK universities obtaining much of their funding by charging international students exorbitant tuition fees of tens of thousands of pounds per year. In 2022 international students made up 26% of the total student population. But with the number of international students coming to the UK recently falling and tuition fees for domestic students having increased by less than 6% over the last decade, the income from students is no longer enough to keep our universities afloat.

Both Day and Universities UK (UUK) – the advocacy organization for universities in the UK – pushed for the UK government to allow universities to increase fees for both international and domestic students. They suggested raising the cap on tuition fees for UK students to £13,000 per year, much more than the new cap that was set earlier this month at £9535. Increasing tuition fees further, however, would be a disaster for our education system.

The introduction of student fees was sold to universities in the late 1990s as a way to get more money, and sold to the wider public as a way to allow “market fairness” to improve the quality of education given by universities. In truth, it was never about either of these things.

Tuition fees were about making sure that the UK government would not have to worry about universities pressuring them to increase funding. Universities instead would have to rationalize higher fees with the students themselves. But it is far easier to argue that “we need more money from you, the government, to continue the social good we do” than it is to say “we need more money from you, the students, to keep giving you the same piece of paper”.

Degree-level education in the UK is now treated as a private commodity, to be sold by universities and bought by students, with domestic students taking out a loan from the government that they pay back once they earn above a certain threshold. But this implies that it is only students who profit from the education and that the only benefit for them of a degree is a high-paid job.

Education ends up reduced to an initial financial outlay for a potential future financial gain, with employers looking for job applicants with a degree regardless of what it is in. We might as well just sell students pieces of paper boasting about how much money they have “invested” in themselves.

Yet going to university brings so much more to students than just a boost to their future earnings. Just look, for example, at the high student satisfaction for arts and humanities degrees compared to business or engineering degrees. University education also brings huge social, cultural and economic benefits to the wider community at a local, regional and national level.

UUK estimates that for every £1 of public money invested in the higher-education sector across the UK, £14 is put back into the economy – totalling £265bn per year. Few other areas of government spending give such large economic returns for the UK. No wonder, then, that other countries continue to fund their universities centrally through taxes rather than fees. (Countries such as Germany that do levy fees charge only a nominal amount, as the UK once did.)

Some might say that the public should not pay for students to go to university. But that argument doesn’t stack up. We all pay for roads, schools and hospitals from general taxation whether we use those services or not, so the same should apply for university education. Students from Scotland who study in the country have their fees paid by the state, for example.

Up in arms

Thankfully, some subsidy still remains in the system, mainly for technical degrees such as the sciences and medicine. These courses on average cost more to run than humanities and social sciences courses due to the cost of practical work and equipment. However, as budgets tighten, even this is being threatened.

In 2004 Newcastle closed its physics degree programme due to its costs. While the university soon reversed the mistake, the decision lives long in the memories of those who today still talk about the incalculable damage this and similar cuts did to UK physics. Indeed, I worry that this renewed focus on profitability, which over the last few years has led to many humanities programmes and departments closing at UK universities, could again lead to closures in the sciences. Without additional funding, it seems inevitable.

University leaders should have been up in arms when student fees were introduced in the early 2000s. Instead, most went along with them, and are now reaping what they sowed. University vice-chancellors shouldn’t be asking the government to allow universities to charge ever higher fees – they should be telling the government that we need more money to keep doing the good we do for this country. They should not view universities as private businesses and instead lobby the government to reinstate a no-fee system and to support universities again as being social institutions.

If this doesn’t happen, then the UK academic system will fall. Even if we do somehow manage to cut costs in the short term by around £35m per university, it will only prolong the inevitable. I hope vice-chancellors and the UK government wake up to this fact before it is too late.

The post Why academia should be funded by governments, not students appeared first on Physics World.

]]>
Opinion and reviews Jonte Hance says that the increase in tuition fees will only delay the inevitable fall of the UK academic system https://physicsworld.com/wp-content/uploads/2024/11/2024-11-Forum-university-exam-182059956-iStock_skynesher.jpg newsletter
Ultrafast electron entanglement could be studied using helium photoemission https://physicsworld.com/a/ultrafast-electron-entanglement-could-be-studied-using-helium-photoemission/ Sat, 23 Nov 2024 13:28:23 +0000 https://physicsworld.com/?p=118389 Calculations suggest sub-femtosecond resolution is possible

The post Ultrafast electron entanglement could be studied using helium photoemission appeared first on Physics World.

]]>
The effect of quantum entanglement on the emission time of photoelectrons has been calculated by physicists in China and Austria. Their result includes several counter-intuitive predictions that could be testable with improved free-electron lasers.

The photoelectric effect involves quantum particles of light (photons) interacting with electrons in atoms, molecules and solids. This can result in the emission of an electron (called a photoelectron), but only if the photon energy is greater than the binding energy of the electron.
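
In energy terms, the surplus above the binding energy appears as the photoelectron’s kinetic energy, E_k = hν − E_b. A minimal illustration, using helium’s first ionization energy of about 24.6 eV:

```python
# Photoelectric threshold: an electron is freed only if the photon energy
# exceeds its binding energy; the surplus becomes the electron's kinetic energy.
def photoelectron_energy(photon_energy_ev, binding_energy_ev):
    """Kinetic energy (eV) of the photoelectron, or None below threshold."""
    if photon_energy_ev <= binding_energy_ev:
        return None  # the photon cannot free the electron
    return photon_energy_ev - binding_energy_ev

print(photoelectron_energy(30.0, 24.6))  # a 30 eV XUV photon frees a ~5.4 eV electron
print(photoelectron_energy(20.0, 24.6))  # below threshold: None
```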

“Typically when people calculate the photoelectric effect they assume it’s a very weak perturbation on an otherwise inert atom or solid surface and most of the time does not suffer anything from these other atoms or photons coming in,” explains Wei-Chao Jiang of Shenzhen University in China. In very intense radiation fields, however, the atom may simultaneously absorb multiple photons, and these can give rise to multiple emission pathways.

Jiang and colleagues have done a theoretical study of the ionization of a helium atom from its ground state by intense pulses of extreme ultraviolet (XUV) light. At sufficient photon intensities, there are two possible pathways by which a photoelectron can be produced. In the first, called direct single ionization, one of the ground-state electrons simply absorbs a photon and escapes the potential well. The second is a two-photon pathway called excitation ionization, in which both of the helium electrons absorb a photon from the same light pulse. One of them subsequently escapes, while the other remains in a higher energy level in the residual ion.

Distinct pathways

The two photoemission pathways are distinct, so making a measurement of the emitted electron reveals information about the state of the bound electron that was left behind. The light pulse therefore creates an entangled state in which the two electrons are described by the same quantum wavefunction. To better understand the system, the researchers modelled the emission time for an electron undergoing excitation ionization relative to an electron undergoing direct single ionization.

“The naïve expectation is that, if I have a process that takes two photons, that process will take longer than one where one photon does the whole thing,” says team member Joachim Burgdörfer of the Vienna University of Technology. What the researchers calculated, however, is that photoelectrons emitted by excitation ionization were most likely to be detected about 200 as earlier than photoelectrons produced by direct single ionization. This can be explained semi-classically by assuming that the photoionization event must precede the creation of the helium ion (He+) for the second excitation step to occur. Excitation ionization therefore requires earlier photoemission.

The researchers believe that, in principle, it should be possible to test their model using attosecond streaking or RABBITT (reconstruction of attosecond beating by interference of two-photon transitions). These are special types of pump-probe spectroscopy that can observe interactions at ultrashort timescales. “Naïve thinking would say that, using a 500 as pulse as a pump and a 10 fs pulse as a probe, there is no way you can get time resolution down to say, 10 as,” says Burgdörfer. “This is where recently developed techniques such as streaking or RABBITT  come in. You no longer try to keep the pump and probe pulses apart, instead you want overlap between the pump and probe and you extract the time information from the phase information.”

Simulated streaking

The team also did numerical simulations of the expected streaking patterns at one energy and found that they were consistent with an analytical calculation based on their intuitive picture. “Within a theory paper, we can only check for mutual consistency,” says Burgdörfer.

The principal hurdle to actual experiments lies in generating the required XUV pulses. Pulses from high harmonic generation may not be sufficiently strong to excite the two-photon emission. Free electron laser pulses can be extremely high powered, but are prone to phase noise. However, the researchers note that entanglement between a photoelectron and an ion has been achieved recently at the FERMI free electron laser facility in Italy.

“Testing these predictions employing experimentally realizable pulse shapes should certainly be the next important step,” Burgdörfer says. Beyond this, the researchers intend to study entanglement in more complex systems such as multi-electron atoms or simple molecules.

Paul Corkum at Canada’s University of Ottawa is intrigued by the research. “If all we’re going to do with attosecond science is measure single electron processes, probably we understood them before, and it would be disappointing if we didn’t do something more,” he says. “It would be nice to learn about atoms, and this is beginning to go into an atom or at least its theory thereof.” He cautions, however, that “If you want to do an experiment this way, it is hard.”

The research is described in Physical Review Letters.  

The post Ultrafast electron entanglement could be studied using helium photoemission appeared first on Physics World.

]]>
Research update Calculations suggest sub-femtosecond resolution is possible https://physicsworld.com/wp-content/uploads/2024/11/23-11-24-Time-for-entanglement.jpg
Noodles of fun as UK researchers create the world’s thinnest spaghetti https://physicsworld.com/a/noodles-of-fun-as-uk-researchers-create-the-worlds-thinnest-spaghetti/ Fri, 22 Nov 2024 15:30:54 +0000 https://physicsworld.com/?p=118382 At 372 nanometres, the “nanopasta” is not for consumption

The post Noodles of fun as UK researchers create the world’s thinnest spaghetti appeared first on Physics World.

]]>
While spaghetti might have a diameter of a couple of millimetres and capelli d’angelo (angel hair) is around 0.8 mm, the thinnest known pasta to date is thought to be su filindeu (threads of God), which is made by hand in Sardinia, Italy, and is about 0.4 mm in diameter.

That is, however, until researchers in the UK created spaghetti coming in at a mindboggling 372 nanometres (0.000372 mm) across (Nanoscale Adv. 10.1039/D4NA00601A).

About 200 times thinner than a human hair, the “nanopasta” is made using a technique called electrospinning, in which threads of flour and liquid are pulled through the tip of a needle by an electric charge.
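
The “200 times thinner than a human hair” comparison checks out for a typical hair diameter of around 75 µm – an assumed figure, since hair varies considerably:

```python
# Sanity check of the hair comparison (the hair diameter is an assumed value).
nanopasta_d = 372e-9  # m, reported strand diameter
hair_d = 75e-6        # m, typical human hair (assumption)
print(f"hair / nanopasta diameter ratio: {hair_d / nanopasta_d:.0f}")
```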

“To make spaghetti, you push a mixture of water and flour through metal holes,” notes Adam Clancy from University College London (UCL). “In our study, we did the same except we pulled our flour mixture through with an electrical charge. It’s literally spaghetti but much smaller.”

While each individual strand is too thin to see directly with the human eye or with a visible light microscope, the team used the threads to form a mat of nanofibres about two centimetres across, creating in effect a mini lasagne sheet.

The researchers are now investigating how the starch-based nanofibres could be used for medical purposes such as wound dressing, for scaffolds in tissue regrowth and even in drug delivery. “We want to know, for instance, how quickly it disintegrates, how it interacts with cells, and if you could produce it at scale,” says UCL materials scientist Gareth Williams.

But don’t expect to see nanopasta hitting the supermarket shelves anytime soon. “I don’t think it’s useful as pasta, sadly, as it would overcook in less than a second, before you could take it out of the pan,” adds Williams. And no-one likes rubbery pasta.

The post Noodles of fun as UK researchers create the world’s thinnest spaghetti appeared first on Physics World.

]]>
Blog At 372 nanometres, the “nanopasta” is not for consumption https://physicsworld.com/wp-content/uploads/2024/11/Low-Res_spaghetti-closeup.webp newsletter
Lens breakthrough paves the way for ultrathin cameras https://physicsworld.com/a/lens-breakthrough-paves-the-way-for-ultrathin-cameras/ Fri, 22 Nov 2024 12:00:23 +0000 https://physicsworld.com/?p=118365 A metasurface-based folded lens system shows promise for creating a new generation of slimline cameras

The post Lens breakthrough paves the way for ultrathin cameras appeared first on Physics World.

A research team headed up at Seoul National University has pioneered an innovative metasurface-based folded lens system, paving the way for a new generation of slimline cameras for use in smartphones and augmented/virtual reality devices.

Traditional lens modules, built from vertically stacked refractive lenses, have fundamental thickness limitations, mainly due to the need for space between lenses and the intrinsic volume of each individual lens. In an effort to overcome these restrictions, the researchers – also at Stanford University and the Korea Institute of Science and Technology – have developed a lens system using metasurface folded optics. The approach enables unprecedented manipulation of light with exceptional control of intensity, phase and polarization – all while maintaining thicknesses of less than a millimetre.

Folding the light path

As part of the research – detailed in Science Advances – the team placed metasurface optics horizontally on a glass wafer. These metasurfaces direct light through multiple folded diagonal paths within the substrate, optimizing space usage and demonstrating the feasibility of a 0.7 mm-thick lens module for ultrathin cameras.

“Most prior research has focused on understanding and developing single metasurface elements. I saw the next step as integrating and co-designing multiple metasurfaces to create entirely new optical systems, leveraging each metasurface’s unique capabilities. This was the main motivation for our paper,” says co-author Youngjin Kim, a PhD candidate in the Optical Engineering and Quantum Electronics Laboratory at Seoul National University.

According to Kim, creating a metasurface folded lens system requires a wide range of interdisciplinary expertise. This includes a fundamental understanding of conventional imaging systems, such as ray-optics-based lens module design; knowledge of point spread function and modulation transfer function analysis and imaging simulations, both used to describe the performance of imaging systems; plus a deep awareness of the physical principles behind designing metasurfaces and of the nanofabrication techniques used to construct metasurface systems.

“In this work, we adapted traditional imaging system design techniques, using the commercial tool Zemax, for metasurface systems,” Kim adds. “We then used nanoscale simulations to design the metasurface nanostructures and, finally, we employed lithography-based nanofabrication to create a prototype sample.”

Smoothing the “camera bump”

The researchers evaluated their proposed lens system by illuminating it with an 852 nm laser, observing that it could achieve near-diffraction-limited imaging quality. Folding the optical path reduced the lens module thickness to half of the effective focal length of 1.4 mm, overcoming inherent limitations of conventional optical systems.
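
The geometry can be sketched with a toy calculation: if folding makes the light traverse the substrate twice, the module thickness is the effective focal length divided by the number of passes. The two-pass folding assumption is ours for illustration; only the 1.4 mm effective focal length and 0.7 mm module thickness come from the article.

```python
# Toy model of a folded lens module: the optical path zig-zags through the
# substrate, so the physical thickness is roughly the effective focal length
# divided by the number of passes the light makes through the glass.
effective_focal_length_mm = 1.4
n_passes = 2  # assumed: path folded once, i.e. the substrate is traversed twice

module_thickness_mm = effective_focal_length_mm / n_passes
print(module_thickness_mm)  # 0.7
```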

“Potential applications include fully integrated, miniaturized, lightweight camera systems for augmented reality glasses, as well as solutions to the ‘camera bump’ issue in smartphones and miniaturized microscopes for in vivo imaging of live animals,” Kim explains.

Kim also highlights some more general advantages of using novel folded lens systems in devices such as compact cameras, smartphones and augmented/virtual reality devices – especially when compared with existing approaches – including the ultraslim and lightweight form factor and the potential for mass production using standard semiconductor fabrication processes.

When it comes to further research and practical applications in this area over the next few years, Kim points out that metasurface folded optics “offer a powerful platform for light modulation” within an ultrathin form factor, particularly since the system’s thickness remains constant regardless of the number of metasurfaces used.

“Recently, there has been growing interest in co-designing hardware-based optical elements with software-based AI image processing for end-to-end optimization, which maximizes device functionality for specific applications,” he says. “Future research may focus on combining metasurface folded optics with end-to-end optimization to harness the strengths of both advanced hardware and AI.”

Martin Rees, Carlo Rovelli and Steven Weinberg tackle big questions to mark Oxford anniversary https://physicsworld.com/a/martin-rees-carlo-rovelli-and-steven-weinberg-tackle-big-questions-to-mark-oxford-anniversary/ Fri, 22 Nov 2024 09:00:35 +0000 https://physicsworld.com/?p=118318 Milestone marked by commemorative volume in Journal of Physics: Conference Series

The post Martin Rees, Carlo Rovelli and Steven Weinberg tackle big questions to mark Oxford anniversary appeared first on Physics World.

If you want to read about controversies in physics, a (brief) history of the speed of light or the quest for dark matter, then make sure to check out this collection of papers to mark the 10th anniversary of the St Cross Centre for the History and Philosophy of Physics (HAPP).

HAPP was co-founded in 2014 by Jo Ashbourn and James Dodd and since then the centre has run a series of one-day conferences as well as standalone lectures and seminars about big topics in physics and philosophy.

Based on these contributions, HAPP has now published a 10th anniversary commemorative volume in the open-access Journal of Physics: Conference Series, which is published by IOP Publishing.

The volume is structured around four themes: physicists across history; space and astronomy; philosophical perspectives; and concepts in physics.

The big names in physics to write for the volume include Martin Rees on the search for extraterrestrial intelligence across a century; Carlo Rovelli on scientific thinking across the centuries; and the late Steven Weinberg on the greatest physics discoveries of the 20th century.

I was delighted to also contribute to the volume based on a talk I gave in February 2020 for a one-day HAPP meeting about big science in physics.

The conference covered the past, present and future of big science and I spoke about the coming decade of new facilities in physics and the possible science that may result. I also included my “top 10 facilities to watch” for the coming decade.

In a preface to the volume, Ashbourn writes that HAPP was founded to provide “a forum in which the philosophy and methodologies that inform how current research in physics is undertaken would be included alongside the history of the discipline in an accessible way that could engage the general public as well as scientists, historians and philosophers,” adding that she is “looking forward” to HAPP’s second decade.

Top-cited authors from North America share their tips for boosting research impact https://physicsworld.com/a/top-cited-authors-from-north-america-share-their-tips-for-boosting-research-impact/ Thu, 21 Nov 2024 21:00:03 +0000 https://physicsworld.com/?p=118289 Sarah Vigeland, Stephen Taylor and Carl White discuss the importance of citation metrics

The post Top-cited authors from North America share their tips for boosting research impact appeared first on Physics World.

More than 80 papers from North America have been recognized with a Top Cited Paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors who have papers published in both IOP Publishing and its partners’ journals from 2021 to 2023 that are in the top 1% of the most cited papers.

Among the awardees are astrophysicists Sarah Vigeland and Stephen Taylor, who are co-authors of the winning article examining the gravitational-wave background using NANOGrav data. “This is an incredible validation of the hard work of the entire NANOGrav collaboration, who persisted over more than 15 years in the search for gravitational wave signals at wavelengths of lightyears,” say Vigeland and Taylor in a joint e-mail.

They add that the article has sparked unexpected “interest and engagement” from the high-energy theory and cosmology communities and that the award is a “welcome surprise”.

While citations give broader visibility, the authors say that research is not impactful because of its citations alone, but rather it attracts citations because of its impact and importance.

“Nevertheless, a high citation count does signal to others that a paper is relevant and worth reading, which will attract broader audiences and new attention,” they explain, adding that a paper is often highly citable because it addresses “an interesting problem” that intersects a variety of different disciplines. “Such work will attract a broad readership and make it more likely for researchers to cite a paper,” they say.

Aiming for impact

Another top-cited award winner from North America is bio-inspired engineer Carl White who is first author of the winning article about a tuna-inspired robot called Tunabot Flex. “In our paper, we designed and tested a research platform based on tuna to close the performance gap between robotic and biological systems,” says White. “Using this platform, termed Tunabot Flex, we demonstrated the role of body flexibility in high-performance swimming.”

White notes that the interdisciplinary nature of the work between engineers and biologists led to researchers from a variety of fields citing it. “Our paper is just one example of the many studies benefitting from the rich cross-pollination of ideas to new contexts,” says White, adding that the IOP Publishing award is a “great honour”.

White states that scientific knowledge grows in “irregular and interconnected” ways and tracing citations from one paper to another “provides transparency into the origins of ideas and their development”.

“My advice to researchers looking to maximize their work’s impact is to focus on a novel idea that addresses a significant need,” says White. “Innovative work fills gaps in existing literature, so you must identify a gap and then characterize its presence. Show how your work is groundbreaking by thoroughly placing it within the context of your field.”

  • For the full list of top-cited papers from North America for 2024, see here. To read the award-winning research click here and here.
  • For the full in-depth interviews with White, Vigeland and Taylor, see here.

Quantum error correction research yields unexpected quantum gravity insights https://physicsworld.com/a/quantum-error-correction-research-yields-unexpected-quantum-gravity-insights/ Thu, 21 Nov 2024 16:00:35 +0000 https://physicsworld.com/?p=118341 Universal boundary that distinguishes effective approximate error correction codes from ineffective ones turns out to be connected to the fundamental nature of the universe

The post Quantum error correction research yields unexpected quantum gravity insights appeared first on Physics World.

In computing, quantum mechanics is a double-edged sword. While computers that use quantum bits, or qubits, can perform certain operations much faster than their classical counterparts, these qubits only maintain their quantum nature – their superpositions and entanglement – for a limited time. Beyond this so-called coherence time, interactions with the environment, or noise, lead to loss of information and errors. Worse, because quantum states cannot be copied – a consequence of quantum mechanics known as the no-cloning theorem – or directly observed without collapsing the state, correcting these errors requires more sophisticated strategies than the simple duplications used in classical computing.
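
The “simple duplication” used in classical computing is the repetition code: store several copies of each bit and decode by majority vote. A minimal sketch (illustrative only, not from the paper):

```python
from collections import Counter

def encode(bit: int, n: int = 3) -> list[int]:
    """Classical repetition code: store n identical copies of the bit."""
    return [bit] * n

def decode(copies: list[int]) -> int:
    """Majority vote recovers the bit if fewer than half the copies flipped."""
    return Counter(copies).most_common(1)[0][0]

codeword = encode(1)      # [1, 1, 1]
codeword[0] ^= 1          # a single bit-flip error -> [0, 1, 1]
print(decode(codeword))   # 1: the error is corrected
```

The no-cloning theorem rules out that first copying step for an unknown quantum state, which is why quantum codes must instead spread information non-locally across entangled qubits.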

One such strategy is known as an approximate quantum error correction (AQEC) code. Unlike exact QEC codes, which aim for perfect error correction, AQEC codes help quantum computers return to almost, though not exactly, their intended state. “When we can allow mild degrees of approximation, the code can be much more efficient,” explains Zi-Wen Liu, a theoretical physicist who studies quantum information and computation at China’s Tsinghua University. “This is a very worthwhile trade-off.”

The problem is that the performance and characteristics of AQEC codes are poorly understood. For instance, AQEC conventionally entails the expectation that errors will become negligible as system size increases. For random local noise, this can in fact be achieved simply by appending a series of redundant qubits to the logical state; the likelihood of the logical information being affected would, in that case, be vanishingly small. However, this approach is ultimately unhelpful. This raises the questions: what separates good (that is, non-trivial) codes from bad ones, and is this dividing line universal?
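
The trivial construction can be captured by a toy calculation (ours, not the paper’s formalism): park the logical qubit on one of n physical qubits and let a single local error strike a uniformly random qubit.

```python
# Toy model of a "trivial" AQEC code: the logical qubit sits on one of n
# physical qubits, with the rest acting as redundant padding. A single local
# error lands on a uniformly random qubit, so the chance that it corrupts
# the logical information is 1/n -- vanishing as the system grows, even
# though the code does nothing interesting.
def corruption_probability(n_qubits: int) -> float:
    return 1.0 / n_qubits

for n in (10, 100, 1000):
    print(n, corruption_probability(n))
```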

Establishing a new boundary

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, together with Daniel Gottesman of the University of Maryland in the US, Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so.

To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity.

Circuit complexity, an important concept in both computer science and physics, represents the optimal cost of a computational process. This cost can be assessed in many ways, with the most intuitive metrics being the minimum time or the “size” of computation required to prepare a quantum state using local gate operations. For instance, how long does it take to link up the individual qubits to create the desired quantum states or transformations needed to complete a computational task?

The researchers found that if the subsystem variance falls below a certain threshold, any code within this regime is considered a nontrivial AQEC code and subject to a lower bound of circuit complexity. This finding is highly general and does not depend on the specific structures of the system. Hence, by establishing this boundary, the researchers gained a more unified framework for evaluating and using AQEC codes, allowing them to explore broader error correction schemes essential for building reliable quantum computers.

A quantum leap

But that wasn’t all. The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature.

One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. These conditions include long-range entanglement, which is a circuit complexity condition, and topological entanglement entropy, which quantifies the extent of long-range entanglement. The new framework clarifies the connection between these entanglement conditions and topological quantum order, allowing researchers to better understand these exotic phases of matter.

A more surprising connection, though, concerns one of the deepest questions in modern physics: how do we reconcile quantum mechanics with Einstein’s general theory of relativity? While quantum mechanics governs the behavior of particles at the smallest scales, general relativity accounts for gravity and space-time on a cosmic scale. These two pillars of modern physics have some incompatible intersections, creating challenges when applying quantum mechanics to strongly gravitational systems.

In the 1990s, a mathematical framework called the anti-de Sitter/conformal field theory correspondence (AdS/CFT) emerged as a way of using CFT to study quantum gravity even though it does not incorporate gravity. As it turns out, the way quantum information is encoded in CFT has conceptual ties to QEC. Indeed, these ties have driven recent advances in our understanding of quantum gravity.

By studying CFT systems at low energies and identifying connections between code properties and intrinsic CFT features, the researchers discovered that the CFT codes that pass their AQEC threshold might be useful for probing certain symmetries in quantum gravity. New insights from AQEC codes could even lead to new approaches to spacetime and gravity, helping to bridge the divide between quantum mechanics and general relativity.

Some big questions remain unanswered, though. One of these concerns the line between trivial and non-trivial codes. For instance, what happens to codes that live close to the boundary? The researchers plan to investigate scenarios where AQEC codes could outperform exact codes, and to explore ways to make the implications for quantum gravity more rigorous. They hope their study will inspire further explorations of AQEC’s applications to other interesting physical systems.

The research is described in Nature Physics.

Mechanical qubit could be used in quantum sensors and quantum memories https://physicsworld.com/a/mechanical-qubit-could-be-used-in-quantum-sensors-and-quantum-memories/ Thu, 21 Nov 2024 13:08:31 +0000 https://physicsworld.com/?p=118303 Resonator is nonlinear at the single quantum level

The post Mechanical qubit could be used in quantum sensors and quantum memories appeared first on Physics World.

Researchers in Switzerland have created a mechanical qubit using an acoustic wave resonator, marking a significant step forward in quantum acoustodynamics. The qubit is not good enough for quantum logic operations, but researchers hope that further efforts could lead to applications in quantum sensing and quantum memories.

Contemporary quantum computing platforms such as trapped ions and superconducting qubits operate according to the principles of quantum electrodynamics. In such systems, quantum information is held in electromagnetic states and transmitted using photons. In quantum acoustodynamics, however, the quantum information is stored in the quantum states of mechanical resonators. These devices interact with their surroundings via quantized vibrations (phonons), which cannot propagate through a vacuum. As a result, isolated mechanical resonators can have much longer lifetimes than their electromagnetic counterparts. This could be particularly useful for creating quantum memories.

John Teufel of the US’s National Institute for Standards and Technology (NIST) and his team shared Physics World’s 2021 Breakthrough of the Year award for using light to achieve the quantum entanglement of two mechanical resonators. “If you entangle two drums, you know that their motion is correlated beyond vacuum fluctuations,” explains Teufel. “You can do very quantum things, but what you’d really want is for these things to be nonlinear at the single-photon level – that’s more like a bit, holding one and only one excitation – if you want to do things like quantum computing. In my work that’s not a regime we’re usually ever in.”

Hitherto impossible

Several groups such as Yiwen Chu’s at ETH Zurich have interfaced electromagnetic qubits with mechanical resonators and used qubits to induce quantized mechanical excitations. Actually producing a mechanical qubit had proved hitherto impossible, however. A good qubit must have two energy levels, akin to the 1 and 0 states of a classical bit. It can then be placed (or initialized) in one of those levels and remain in a coherent superposition of the two without other levels interfering.

This is possible if the system has unevenly spaced energy levels – which is true in an atom or ion, and can be engineered in a superconducting qubit. Driving a qubit using photons with the exact transition energy then excites Rabi oscillations, in which the population of the upper level rises and falls periodically. However, acoustic resonators are harmonic oscillators, and the energy levels of a harmonic oscillator are evenly spaced. “Every time we would prepare a phonon mode into a harmonic oscillator we would jump by one energy level,” says Igor Kladarić, who is a PhD student in Chu’s group.
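
The spacing argument is easy to make concrete. In a harmonic oscillator the levels E_n = ħω(n + 1/2) are all equally spaced, so a drive resonant with the 0→1 transition also drives 1→2; an anharmonic correction makes the gaps distinct, so the lowest two levels can be addressed alone. The toy numbers below are illustrative, not taken from the paper.

```python
# Toy energy spectra (in units of hbar*omega) illustrating why a harmonic
# oscillator cannot serve as a qubit while an anharmonic system can.
def harmonic_level(n: int) -> float:
    return n + 0.5                                 # E_n = (n + 1/2) hbar*omega

def anharmonic_level(n: int, alpha: float = -0.05) -> float:
    return n + 0.5 + 0.5 * alpha * n * (n - 1)     # transmon-like correction

harmonic_gaps = [harmonic_level(n + 1) - harmonic_level(n) for n in range(3)]
anharmonic_gaps = [anharmonic_level(n + 1) - anharmonic_level(n) for n in range(3)]

print(harmonic_gaps)     # [1.0, 1.0, 1.0] -> a 0-1 drive also excites 1-2
print(anharmonic_gaps)   # gaps differ    -> the 0-1 transition is addressable
```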

In the new work, Kladarić and colleagues used a superconducting transmon qubit coupled to an acoustic resonator on a sapphire chip. The frequency of the superconducting qubit was slightly off-resonance with that of the mechanical resonator. Without being driven in any way, the superconducting qubit coupled to the mechanical resonator and created a shift in the frequencies of the ground state and first excited state of the resonator. This created the desired two-level system in the resonator.

Swapping excitations

The researchers then injected microwave signals at the frequency of the mechanical resonator, converting them into acoustic signals using piezoelectric aluminium nitride. “The way we did the measurement is the way we did it beforehand,” says Kladarić. “We would simply put our superconducting qubit on resonance with our mechanical qubit to swap an excitation back into the superconducting qubit and then simply read out the superconducting qubit itself.”

The researchers confirmed that the mechanical resonator undergoes Rabi oscillations between the ground and first excited states, with less than 10% probability of leakage into the second excited state, and was therefore a true mechanical qubit.

The team is now working to improve the qubit to the point where it could be useful in quantum information processing. They are also interested in the possibility of using the qubit in quantum sensing. “These mechanical systems are very massive and so…they can couple to degrees of freedom that single atoms or superconducting qubits cannot, such as gravitational forces,” explains Kladarić.

Teufel is impressed by the Swiss team’s accomplishment, “There are a very short list of strong nonlinearities in nature that are also clean and not lossy…The hard thing for any technology is to make something that’s simultaneously super-nonlinear and super-long lived, and if you do that, you’ve made a very good qubit”. He adds, “This is really the first mechanical resonator that is nonlinear at the single quantum level…It’s not a spectacular qubit yet, but the heart of this work is demonstrating that this is yet another of a very small number of technologies that can behave like a qubit.”

Warwick Bowen of Australia’s University of Queensland told Physics World, “the creation of a mechanical qubit has been a dream in the quantum community for many decades – taking the most classical of systems – a macroscopic pendulum – and converting it to the most quantum of systems, effectively an atom.”

The mechanical qubit is described in Science.

Top tips for physics outreach from a prize winner, making graphene more sustainable https://physicsworld.com/a/top-tips-for-physics-outreach-from-a-prize-winner-making-graphene-more-sustainable/ Thu, 21 Nov 2024 10:12:52 +0000 https://physicsworld.com/?p=118322 This podcast features an advanced-materials expert and a prizewinning science communicator

The post Top tips for physics outreach from a prize winner, making graphene more sustainable appeared first on Physics World.

In this episode of the Physics World Weekly podcast I am in conversation with Joanne O’Meara, who has bagged a King Charles III Coronation Medal for her outstanding achievements in science education and outreach. Based at Canada’s University of Guelph, the medical physicist talks about her passion for science communication and her plans for a new science centre.

This episode also features a wide-ranging interview with Burcu Saner Okan, who is principal investigator at Sabanci University’s Sustainable Advanced Materials Research Group in Istanbul, Turkey. She explains how graphene is manufactured today and how the process can be made more sustainable – by using recycled materials as feedstocks, for example. Saner Okan also talks about her commercial endeavours including Euronova.

VolkVac Instruments uses Atlas Technologies’ bi-metal expertise to create lightweight UHV suitcases https://physicsworld.com/a/volkvac-instruments-uses-atlas-technologies-bi-metal-expertise-to-create-lightweight-uhv-suitcases/ Wed, 20 Nov 2024 16:24:47 +0000 https://physicsworld.com/?p=118229 Atlas Technologies is helping VolkVac develop the next generation of its UHV suitcases

The post VolkVac Instruments uses Atlas Technologies’ bi-metal expertise to create lightweight UHV suitcases appeared first on Physics World.

UHV suitcases address an important challenge facing people who use ultrahigh vacuum (UHV) systems: it can be extremely difficult to move samples from one UHV system to another without the risk of contamination. While some UHV experiments are self-contained, it is often the case that research benefits from using cutting-edge analytical techniques that are only available at large facilities such as synchrotrons, free-electron lasers and neutron sources.

Normally, fabricating a UHV sample in one place and studying it in another involves breaking the vacuum and then removing and transporting the sample. This is unsatisfactory for two reasons. First, no matter how clean a handling system is, exposing a sample to air will change or even destroy its material properties – often irrevocably. The second problem is that an opened UHV chamber must be baked out before it can be used again – and a bakeout can take several days out of a busy research schedule.

These problems can be avoided by connecting a portable UHV system (called a UHV suitcase) to the main vacuum chamber and then transferring the sample between the two. This UHV suitcase can then be used to move the sample across a university campus – or indeed, halfway around the world – where it can be transferred to another UHV system.

Ultralight aluminium UHV suitcases

While commercial designs have improved significantly over the past two decades, today’s UHV suitcases can still be heavy, unwieldy and expensive. To address these shortcomings, US-based VolkVac Instruments has developed the ULSC ultralight aluminium suitcase, which weighs less than 10 kg, and an even lighter version – the ULSC-R – which weighs in at less than 7 kg.

Key to the success of VolkVac’s UHV suitcases is the use of lightweight aluminium to create the portable vacuum chamber. The metal is used instead of stainless steel, a more conventional material for UHV chambers. As well as being lighter, aluminium is also much easier to machine. This means that VolkVac’s UHV suitcases can be efficiently machined from a single piece of aluminium. The lightweight material is also non-magnetic. This is an important feature for VolkVac because it means the suitcases can be used to transport samples with delicate magnetic properties.

Based in Escondido, California, VolkVac was founded in 2020 by the PhD physicist Igor Pinchuk. He says that the idea of a UHV suitcase is not new – pointing out that researchers have been creating their own bespoke solutions for decades. The earliest were simply standard vacuum chambers that were disconnected from one UHV system and then quickly wheeled to another – without being pumped.

This has changed in recent years with the arrival of new materials, vacuum pumps, pump controllers and batteries. It is now possible to create a lightweight, portable UHV chamber with a combination of passive and battery-powered pumps. Pinchuk explains that having an integrated pump is crucial because it is the only way to maintain a true UHV environment during transport.

Including pumps, controllers and batteries means that the material used to create the chamber of a UHV suitcase must be as light as possible to keep the overall weight to a minimum.

Aluminium is the ideal material

While aluminium is the ideal material for making UHV suitcases, it has one shortcoming – it is a relatively soft metal. Access to UHV chambers is provided by ConFlat flanges, which have sharp circular knife edges that are driven into a copper-ring gasket to create an exceptionally airtight seal. The problem is that aluminium is too soft to provide durable, long-lasting sharp knife edges on flanges.

This is why VolkVac has looked to Atlas Technologies for its expertise in bi-metal fabrication. Atlas fabricates aluminium flanges with titanium or stainless-steel knife edges. Because VolkVac requires non-magnetic materials for its UHV suitcases, Atlas developed titanium–aluminium flanges for the company.

Atlas Technologies’ Jimmy Stewart coordinates the company’s collaboration with VolkVac. He says that the first components for Pinchuk’s newest UHV suitcase, a custom iteration of VolkVac’s ULSC, have already been machined. He explains that VolkVac continues to work very closely with Atlas’s lead machinist and lead engineer to bring Pinchuk’s vision to life in aluminium and titanium.

Close relationship between Atlas and VolkVac

Stewart explains that this close relationship is necessary because bi-metal materials have very special requirements when it comes to things like welding and stress relief.

Stewart adds that Atlas often works like this with its customers to produce equipment that is used across a wide range of sectors including semiconductor fabrication, quantum computing and space exploration.

Because of the historical use of stainless steel in UHV systems, Stewart says that some customers have not yet used bi-metal components. “They may have heard about the benefits of bi-metal,” says Stewart, “but they don’t have the expertise. And that’s why they come to us – for our 30 years of experience and in-depth knowledge of bi-metal and aluminium vacuum.” He adds, “Atlas invented the market and pioneered the use of bi-metal components.”

Pinchuk agrees, saying that he knows stainless steel UHV technology forwards and backwards, but now he is benefitting from Atlas’s expertise in aluminium and bi-metal technology for his product development.

Three-plus decades of bi-metal expertise

Atlas Technologies was founded in 1993 by father and son Richard and Jed Bothell. Based in Port Townsend, Washington, the company specializes in creating aluminium vacuum chambers with bi-metal flanges. Atlas also designs and manufactures standard and custom bi-metal fittings for use outside of UHV applications.

Binding metals to aluminium to create vacuum components is a tricky business. The weld must be UHV compatible in terms of maintaining low pressure and not being prone to structural failure during the heating and cooling cycles of bakeout – or when components are cooled to cryogenic temperatures.

Jed Bothell points out that Japanese companies had pioneered the development of aluminium vacuum chambers but had struggled to create good-quality flanges. In the early 1990s, he was selling explosion-welded couplings and had no vacuum experience. His father, however, was familiar with the vacuum industry and realized that there was a business opportunity in creating bi-metal components for vacuum systems and other uses.

Explosion welding is a solid-phase technique whereby two plates of different metals are placed on top of each other. The top plate is then covered with an explosive material that is detonated starting at an edge. The force of the explosion pushes the plates together, plasticizing both metals and causing them to stick together. The interface between the two materials is wavy, which increases the bonded surface area and strengthens the bond.

Strong bi-metal bond

What is more, the air at the interface between the two metals is ionized, creating a plasma that travels along the interface ahead of the weld, driving out impurities before the weld is made – which further strengthens the bond. The resulting bi-metal material is then machined to create UHV flanges and other components.

As well as bonding aluminium to stainless steel, explosive welding can be used to create bi-metal structures of titanium and aluminium – avoiding the poor UHV properties of stainless steel.

“Stainless steel is a bad material for vacuum in a lot of ways,” Bothell explains. He describes the hydrogen outgassing problem as a “serious headwind” against using stainless steel for UHV (see box “UHV and XHV: science and industry benefit from bi-metal fabrication”). That is why Atlas developed bi-metal technologies that allow aluminium to be used in UHV components – and Bothell adds that it also shows promise for extreme high vacuum (XHV).

UHV and XHV: science and industry benefit from bi-metal fabrication

Custom vacuum chamber

Modern experiments in condensed matter physics, materials science and chemistry often involve the fabrication and characterization of atomic-scale structures on surfaces. Usually, such experiments cannot be done at atmospheric pressure because samples would be immediately contaminated by gas molecules. Instead, these studies must be done in either UHV or XHV chambers – which both operate in the near absence of air. UHV and XHV also have important industrial applications including the fabrication of semiconductor chips.

UHV systems operate at pressures in the range 10⁻⁶–10⁻⁹ Pa and XHV systems work at pressures of 10⁻¹⁰ Pa and lower. In comparison, atmospheric pressure is about 10⁵ Pa.

At UHV pressures, it takes several days for a single layer (monolayer) of contaminant gases to build up on a surface – whereas surfaces in XHV will remain pristine for hundreds of days. These low pressures also allow beams of charged particles such as electrons, protons and ions to travel unperturbed by collisions with gas molecules.
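The "several days" figure can be sanity-checked with kinetic theory: the rate at which gas molecules strike a surface is P/√(2πmkT). Here is a back-of-envelope sketch in Python, assuming nitrogen at room temperature, roughly 10¹⁹ adsorption sites per square metre and a sticking coefficient of one – all illustrative assumptions, not figures from the article:

```python
import math

def monolayer_time(pressure_pa, temp_k=300.0, mass_amu=28.0,
                   sites_per_m2=1e19, sticking=1.0):
    """Estimate the time (s) to form one monolayer of gas on a surface.

    Kinetic theory: impingement flux = P / sqrt(2*pi*m*k*T).
    Assumes every molecule that hits the surface sticks (sticking = 1).
    """
    k_b = 1.380649e-23             # Boltzmann constant, J/K
    m = mass_amu * 1.66053907e-27  # molecular mass, kg
    flux = pressure_pa / math.sqrt(2 * math.pi * m * k_b * temp_k)
    return sites_per_m2 / (flux * sticking)

for p in (1e-6, 1e-9, 1e-10):
    t = monolayer_time(p)
    print(f"P = {p:g} Pa -> ~{t:.0f} s ({t / 86400:.1f} days)")
```

With these assumptions a surface at 10⁻⁹ Pa stays clean for a few days, while at 10⁻¹⁰ Pa and below the monolayer time stretches to tens or hundreds of days – consistent with the timescales quoted above.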

Crucial roles in science and industry

As a result, UHV and XHV technologies play crucial roles in particle accelerators and support powerful analytical techniques including angle-resolved photoemission spectroscopy (ARPES), Auger electron spectroscopy (AES), secondary ion mass spectrometry (SIMS) and X-ray photoelectron spectroscopy (XPS).

UHV and XHV also allow exciting new materials to be created by depositing atoms or molecules on surfaces with atomic-layer precision – using techniques such as molecular beam epitaxy. This is very important in the fabrication of advanced semiconductors and other materials.

Traditionally, UHV components are made from stainless steel, whereas XHV systems are increasingly made from titanium. The latter is expensive and a much more difficult material to machine than stainless steel. As a result, titanium tends to be reserved for more specialized applications such as the X-ray lithography of semiconductor devices, particle-physics experiments and cryogenic systems. Unlike stainless steel, titanium is non-magnetic so it is also used in experiments that must be done in very low magnetic fields.

An important shortcoming of stainless steel is that the process used to create the material leaves it full of hydrogen, which finds its way into UHV chambers via a process called outgassing. Much of this hydrogen can be driven out by heating the stainless steel while the chamber is being pumped down to UHV pressures – a process called bakeout. But some hydrogen will be reabsorbed when the chamber is opened to the atmosphere, and therefore time-consuming bakeouts must be repeated every time a chamber is opened.

Less hydrogen and hydrocarbon contamination

Aluminium contains about ten million times less hydrogen than stainless steel and it absorbs much less gas from the atmosphere when a UHV chamber is opened. And because aluminium contains very little carbon, it causes less hydrocarbon-based contamination of the vacuum.

Good thermal properties are crucial for UHV materials and aluminium conducts heat ten times better than stainless steel. This means that a chamber can be heated and cooled much more quickly – without the undesirable hot and cold spots that affect stainless steel. As a bonus, aluminium bakeout can be done at 150 °C, whereas stainless steel must be heated to 250 °C. Furthermore, aluminium vacuum chambers retain most of the gains from previous bakeouts, making them ideal for industrial applications where process up-time is highly valued.

Magnetic fields can have detrimental effects on experiments done at UHV, so aluminium’s low magnetic permeability is ideal. The material also has low residual radioactivity and greater resistance to corrosion than stainless steel – making it favourable for use in high neutron-flux environments. Aluminium is also better at damping vibrations than stainless steel – making delicate measurements possible.

When it comes to designing and fabricating components, aluminium is much easier to machine than stainless steel. This means that a greater variety of component shapes can be quickly made at a lower cost.

Aluminium is not as strong as stainless steel, which means more material is required. But thanks to its low density, about one third that of stainless steel, aluminium components still weigh less than their stainless steel equivalents.
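That trade-off is easy to illustrate with a toy calculation (the wall thicknesses below are invented for the example and are not Atlas specifications): even if an aluminium wall had to be twice as thick as its stainless steel counterpart, aluminium's roughly one-third density would still make it lighter.

```python
# Toy comparison: a chamber wall needing extra thickness in aluminium
# can still weigh less because of the ~3x lower density (illustrative).
rho_steel = 7.9e3     # kg/m^3, stainless steel
rho_al = 2.7e3        # kg/m^3, aluminium

t_steel = 3.0e-3      # assumed steel wall thickness, m
t_al = 2.0 * t_steel  # assume the aluminium wall must be twice as thick

area = 1.0  # compare 1 m^2 of chamber wall
mass_steel = rho_steel * t_steel * area
mass_al = rho_al * t_al * area
print(f"steel: {mass_steel:.1f} kg, aluminium: {mass_al:.1f} kg per m^2")
```

Even with the doubled thickness assumed here, the aluminium panel comes out about a third lighter.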

All of these properties make aluminium an ideal material for vacuum components – and Atlas Technologies’ ability to create bi-metal flanges for aluminium vacuum systems means that both researchers and industrial users can gain from the UHV and XHV benefits of aluminium.

To learn more, visit atlasuhv.com or email info@atlasuhv.com.

The post VolkVac Instruments uses Atlas Technologies’ bi-metal expertise to create lightweight UHV suitcases appeared first on Physics World.

Magnetoelectric nanodiscs deliver non-invasive brain stimulation in mice https://physicsworld.com/a/magnetoelectric-nanodiscs-deliver-non-invasive-brain-stimulation-in-mice/ Wed, 20 Nov 2024 12:09:50 +0000 https://physicsworld.com/?p=118299 Injectable magnetoelectric nanodiscs may activate neurons in localized brain regions when stimulated by a weak external magnetic field, say MIT scientists

The post Magnetoelectric nanodiscs deliver non-invasive brain stimulation in mice appeared first on Physics World.

Magnetoelectric nanodiscs mediate neuromodulation

Scientists have been looking for ways to stimulate the brain for decades. Deep brain stimulation, for example, is an invasive technique that can be used to manage symptoms of neurological conditions including Parkinson’s disease and epilepsy. A non-invasive approach could benefit more people and possibly be deployed earlier in the course of a disease.

“For over a decade, our group has been working on magnetic approaches to control neuronal activity. However, typically these methods relied on specialized receptors – those sensing heat or tension or particular chemicals. But there’s one signal that all neurons can understand: voltage,” says corresponding author Polina Anikeeva, chair of MIT’s Department of Materials Science and Engineering and director of the K. Lisa Yang Brain-Body Center. “So, it was somewhat of a ‘holy grail’ for us to create a particle that would efficiently convert magnetic field into electrical potential.”

Ye Ji Kim, a PhD candidate and lead author on the paper, decided to tackle this problem. The result is a magnetic nanoparticle, called a magnetoelectric nanodisc (MEND), that could be injected into a specific location in the brain and stimulated with an electromagnet located outside of the body. “MENDs harness the signalling mechanisms naturally present in all neurons. This capability marks a significant advancement,” Kim explains.

MENDs, which are approximately 250 nm across, have two layers. One is a magnetostrictive core that changes shape when magnetized and induces a strain in the second layer, a piezoelectric shell. In response to this strain, the shell is electrically polarized, facilitating the delivery of electrical pulses to neurons in response to the external magnetic field.

Characterizing and testing the MENDs also required design work.

“In our simulations, we had to account for the evolution of the non-uniform magnetization and thus non-uniform strain,” says Noah Kent, a postdoctoral fellow at MIT involved in the research. “The comprehensive pipeline composed of Ye Ji’s innovative electrochemical measurements coupled with nanomagnetic simulation will be extremely valuable not only for biological applications of these materials, but more generally for the design of magnetoelectrics.”

Another scientist at MIT, Emmanuel Vargas Paniagua, facilitated tests involving mice. The scientists injected MENDs in solution into specific brain regions of mice and turned on a weak electromagnet in the vicinity to stimulate neurons. They found that MENDs could stimulate the ventral tegmental area – a deep brain region involved with feelings of reward – and the subthalamic nucleus – a brain region associated with motor control that’s typically stimulated in patients receiving deep brain stimulation for management of Parkinson’s disease. Additional results of their in vivo experiments are detailed in Nature Nanotechnology.

Characterization experiments demonstrated that the magnetostrictive effect was amplified by a factor of approximately 1000 relative to that achieved with conventional spherical particles. Meanwhile, conversion of the magnetic effect into an electrical output was only four times greater, which the scientists say suggests areas for improvement. Their next steps include applying MENDs to basic research using animal models, and they have suggested possible designs for future use in humans.

“These particles are very interesting from a translational standpoint, as they do not require genetic modification,” Anikeeva says. “Additionally, the magnetic fields are weak, and the frequencies are low – making electronics safe, simple and potentially portable for human patients.”


NASA’s Jet Propulsion Lab announces further staff layoffs https://physicsworld.com/a/nasas-jet-propulsion-lab-announces-further-staff-layoffs/ Tue, 19 Nov 2024 15:11:55 +0000 https://physicsworld.com/?p=118274 About 325 people, representing 5% of the lab’s employees, will be affected

The post NASA’s Jet Propulsion Lab announces further staff layoffs appeared first on Physics World.

NASA’s Jet Propulsion Laboratory (JPL) has announced another round of staff layoffs. The move, which began in mid-November, involves about 325 people, representing 5% of the lab’s employees. It follows layoffs in February of about 530 JPL staff and 140 of the lab’s outside contractors. According to JPL director Laurie Leshin, the second reduction in employees is occurring “across technical, business and support areas of the laboratory”.

JPL, which the California Institute of Technology runs for NASA, carries out many of the agency’s planetary exploration projects. These include the Europa Clipper mission, which launched in October, and the Perseverance and Curiosity Mars rovers.

The earlier layoff at JPL stemmed from uncertainty over its budget for 2024. Indeed, the Mars Sample Return (MSR) mission has impacted JPL’s financial flexibility. The mission has experienced a series of delays and other problems, and in October 2023 a NASA review board noted that its original price tag of $4bn had risen to $5.3bn. By April 2024 the estimated price had soared to $8–11bn and the date of the samples’ arrival on Earth had slipped to 2040.

US Congress has not yet settled on NASA’s budget for financial year 2025, which began on 1 October, but projections of likely spending on specific NASA institutions and programmes convinced JPL’s leadership to downsize. “With lower budgets and based on the forecasted work ahead, we had to tighten our belts across the board,” Leshin wrote in a memo to employees.

Leshin notes that the number of layoffs is lower than that projected a few months ago “thanks in part to the hard work of so many people across JPL”. She points out that the election of Donald Trump to the US presidency earlier this month had no impact on the layoff decision. “[Even] though the coming leadership transition at NASA may introduce both new uncertainties and new opportunities, this action would be happening regardless of the recent election outcome,” she adds.

Leshin has reassured the lab’s staff that the current layoff should be the final one. “I believe this is the last cross-lab workforce action we will need to take in the foreseeable future,” she wrote. “After this action, we will be at about 5500 JPL regular employees. I believe this is a stable, supportable staffing level moving forward.”


Nuclear shapes revealed in high-energy collisions https://physicsworld.com/a/nuclear-shapes-revealed-in-high-energy-collisions/ Tue, 19 Nov 2024 09:50:25 +0000 https://physicsworld.com/?p=118259 New technique sits on the border of nuclear and particle physics

The post Nuclear shapes revealed in high-energy collisions appeared first on Physics World.

In a groundbreaking study, scientists in the STAR Collaboration have unveiled a method for investigating the shapes of atomic nuclei by colliding them at near light-speed in particle accelerators such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). The approach offers unprecedented insight into nuclear structure and promises to deepen our understanding of the strong nuclear force and its role in the composition of neutron stars and the evolution of the early universe.

Understanding the properties of nuclei is daunting, largely due to the complexities of quantum chromodynamics (QCD), the fundamental theory governing the strong interaction. Calculations in QCD are notoriously difficult at low relative velocities, typical for nucleons within nuclei. Given these challenges, experimental methods in this area are even more crucial than usual.

Historically, scientists relied on two primary techniques to study nuclear shapes. The first involves exciting a nucleus to a higher energy state, often by colliding it with a fixed target. By measuring how long it takes the nucleus to return to its ground state, researchers can gather information about its shape. However, this relaxation process unfolds over much longer timescales than typical nuclear interactions, thus providing only an averaged image of the nucleus and missing finer details.

Another popular method is to bombard nuclei with high-energy electrons, analysing the scattering data to infer structural details. However, this technique only reveals localized properties of the nucleus, falling short in capturing the overall shape, which depends on the coordinated movement of nucleons across the entire nucleus.

Smashing nuclei

The STAR collaboration’s approach circumvents these limitations by smashing nuclei together at extremely high energies and analysing the collision products. Since these high-energy collisions occur on timescales much shorter than typical nuclear processes, the new method promises a more detailed snapshot of nuclear shape.

When two nuclei collide at near-light speeds, they are destroyed, turning into an expanding ball of plasma made of quarks and gluons – the fundamental building blocks of nuclear matter. This plasma lasts only about 10⁻²³ s before forming thousands of new composite particles, which are then caught by detectors. By studying the speeds and angles at which these particles are ejected, scientists can infer the shape of the colliding nuclei.
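That fleetingly short lifetime is roughly the time light takes to cross a nucleus-sized region, as a quick order-of-magnitude check shows (the 14 fm diameter below is an approximate textbook value for a gold nucleus, used purely for illustration):

```python
# Order-of-magnitude check: the ~1e-23 s plasma lifetime is comparable
# to the light-crossing time of a nucleus-sized region.
c = 2.998e8          # speed of light, m/s
diameter = 14e-15    # rough diameter of a gold nucleus, m (~14 fm)
crossing_time = diameter / c
print(f"light-crossing time ~ {crossing_time:.1e} s")
```

No conventional probe can resolve dynamics on such a timescale, which is why the collision products themselves must serve as the "camera".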

“You cannot image the same nuclei again and again because you destroy them in the collision,” explains Jiangyong Jia, a professor at Stony Brook University and one of the lead authors of a paper describing the study. “But by looking at the whole collection of images from many different collisions, scientists can reconstruct the subtle properties of the 3D structure of the smashed nuclei.”

Verifying the results

To verify the reliability of this method, STAR researchers compared their findings with those obtained through established techniques on nuclei with well-known shapes. Specifically, they analysed two types of head-on collisions. These were gold–gold collisions, involving slightly oblate (flattened sphere) gold nuclei; and uranium–uranium collisions, featuring highly prolate (elongated sphere) uranium nuclei. The shapes of these nuclei are well-documented, providing benchmarks for assessing the accuracy of the high-energy approach.

The results from both types of collisions aligned remarkably well with established findings, validating the precision of this high-energy method.

Paul Garrett, who is at Canada’s University of Guelph and was not involved in the research, tells Physics World, “The fact that the high-energy collisions occur over an extraordinarily short time scale – effectively capturing the nucleus with the equivalent of an extremely high-speed camera – opens possibilities for us to see the effects of fluctuations in the nuclear shape that are very difficult to determine using low-energy probes”.

Future directions

The initial success of this new method paves the way for more extensive applications, especially with nuclei whose shapes are not as well understood. The high-energy approach holds potential for exploring finer details beyond the basic prolate or oblate characterizations. For example, it could reveal complex triaxial shapes or capture rapid, transient fluctuations in soft nuclei, offering unprecedented insights into the dynamic interactions among nucleons.

Moreover, this technique could enhance our understanding of the quark–gluon plasma, a state of matter not only produced in high-energy particle collisions but also found in the cores of neutron stars and in the universe’s earliest moments. During that primordial phase, temperatures were so extreme that protons and neutrons could not form, leaving all strongly interacting matter in a quark-gluon state.

“Indeed, I think this study is the tip of the iceberg of what the technique can do, and will ultimately be one of the groundbreaking studies in nuclear physics,” said Garrett. “Sitting on the border of traditional nuclear physics and high-energy physics, it will bring the communities together and clearly demonstrates that we have much to learn from each other.”

The research is described in Nature.


New modular synchronous source measure system from Lake Shore Cryotronics https://physicsworld.com/a/new-modular-synchronous-source-measure-system-from-lake-shore-cryotronics/ Mon, 18 Nov 2024 15:25:45 +0000 https://physicsworld.com/?p=118261 Lake Shore Cryotronics showcases the new M81-SSM

The post New modular synchronous source measure system from Lake Shore Cryotronics appeared first on Physics World.


This video examines the unique measurement capabilities of the modular M81-SSM synchronous source measure system from Lake Shore Cryotronics. In this hands-on demonstration, Lake Shore looks at its components, including four types of amplifier modules that are combined with the M81-SSM instrument to enable low-level DC, AC and mixed AC/DC measurements.

The video discusses how all source and measure channels are simultaneously sampled at a very high rate and provide DC to 100 kHz operation – including lock-in operation – on up to three source and three measure channels at the same time to ensure time-correlated synchronous measurements.

Also demonstrated is how quickly and easily the M81-SSM can measure various values of resistance using very low DC and AC currents, illustrating the limitations of DC methods and the advantages of AC lock-in methods as the signal of interest becomes affected by thermal offsets and other parasitic effects.
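The principle behind that advantage can be sketched in a few lines of Python: a small AC signal riding on a large DC offset defeats simple averaging, but demodulating against the excitation frequency rejects the offset. All numbers below are illustrative assumptions and bear no relation to the M81-SSM's actual internals:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 375e3     # sample rate, Hz (the M81-SSM digitizes at 375 kHz)
f_ref = 1e3    # excitation/reference frequency, Hz
t = np.arange(int(fs * 0.1)) / fs   # 100 ms of samples

true_amp = 1e-6   # 1 uV AC signal of interest
offset = 5e-6     # thermal-EMF-like DC offset swamping the signal
noise = 2e-7 * rng.standard_normal(t.size)

measured = true_amp * np.sin(2 * np.pi * f_ref * t) + offset + noise

# A naive DC average is dominated by the parasitic offset:
dc_estimate = measured.mean()

# Lock-in: multiply by quadrature references and average (low-pass).
x = 2 * np.mean(measured * np.sin(2 * np.pi * f_ref * t))
y = 2 * np.mean(measured * np.cos(2 * np.pi * f_ref * t))
lockin_amp = np.hypot(x, y)

print(f"DC estimate:      {dc_estimate:.2e} V (biased by the offset)")
print(f"lock-in estimate: {lockin_amp:.2e} V (recovers the AC amplitude)")
```

Because the DC offset has no component at the reference frequency, it averages away in the demodulation, leaving the amplitude of the signal of interest.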

Unique MeasureSync™ signal synchronization technology

The M81-SSM’s MeasureSync™ technology ensures inherently synchronized measurements from one to three source channels and from one to three measure channels per each half-rack instrument. Amplitude and frequency signals are transmitted to/from the remote amplifier modules using a proprietary real-time analogue method that minimizes noise and ground errors while ensuring tight time and phase synchronization between all modules. Because the M81-SSM sources and measures channels synchronously, multiple devices can be tested under identical conditions so users can easily obtain time-correlated data.

Connect up to three source modules and up to three measure modules at once

The M81-SSM provides DC to 100 kHz precision electrical source and measure capabilities with 375 kHz (2.67 μs) source/measure digitization rates across up to three source and three measurement front-end modules.

Users can choose from differential voltage measure (VM-10) and balanced current source (BCS-10) modules, and single-ended current measure (CM-10) and voltage source (VS-10) modules. All modules use 100% linear amplifiers and are powered by highly isolated linear power supplies for the lowest possible voltage/current noise performance — rivalling the most sensitive lock-in amplifiers and research lab-grade source and measure instruments.

On the VS-10 module, dual AC and DC range sourcing allows for precise full control of DC and AC amplitude signals with a single module and sample/device connection. And on the VM-10 module, seamless range change measuring significantly reduces or eliminates the typical range change-induced measurement offsets/discontinuities in signal sweeping applications that require numerous range changes.

For details, visit the M81-SSM webpage at www.lakeshore.com/M81.



Nanoflake-based breath sensor delivers ultrasensitive lung cancer screening https://physicsworld.com/a/nanoflake-based-breath-sensor-delivers-ultrasensitive-lung-cancer-screening/ Mon, 18 Nov 2024 15:00:09 +0000 https://physicsworld.com/?p=118251 Gas sensor made from nanoflakes of indium oxide-based materials successfully identifies individuals with lung cancer

The post Nanoflake-based breath sensor delivers ultrasensitive lung cancer screening appeared first on Physics World.

Gas sensing cell

Analysis of human breath can provide a non-invasive method for cancer screening or disease diagnosis. The level of isoprene in exhaled breath, for example, provides a biomarker that can indicate the presence of lung cancer. Now a research collaboration from China and Spain has used nanoflakes of indium oxide (In2O3)-based materials to create a gas sensor with the highest performance of any isoprene sensor reported to date.

For effective cancer screening or diagnosis, a gas sensor must be sensitive enough to detect the small amounts of isoprene present in breath (in the parts-per-billion (ppb) range) and able to differentiate isoprene from other exhaled compounds. The metal oxide semiconductor In2O3 is a promising candidate for isoprene sensing, but existing devices are limited by high operating temperatures and poor detection limits.

SEM micrograph of nanoflakes

To optimize the sensing performance, the research team – led by Pingwei Liu from Zhejiang University and Qingyue Wang from Institute of Zhejiang University – developed a series of sensors made from nanoflakes of pure In2O3, nickel-doped (InNiOx) or platinum-loaded (Pt@InNiOx). The sensors comprise an insulating substrate with interdigitated gold/titanium electrodes, coated with a layer of roughly 10 nm-thick nanoflakes. When the sensor is exposed to isoprene, adsorption of isoprene onto the nanoflakes causes an increase in the detected electrical signal.

“The nanoflakes’ two-dimensional structure provides a relatively high surface area and pore volume compared with the bulk structure, thus promoting isoprene adsorption and enhancing electron interaction and electrical signals,” Wang explains. “This improves the sensitivity of the gas sensor.”

The researchers – also from Second Affiliated Hospital, Zhejiang University School of Medicine and Instituto de Catálisis y Petroleoquímica, CSIC – assessed the isoprene sensing performance of the various sensor chips. All three exhibited a linear response to isoprene at concentrations from their limit of detection (LOD) up to 500 ppb at an operating temperature of 200 °C. Pt@InNiOx showed a response at least four times higher than InNiOx and In2O3, as well as an exceptionally low LOD of 2 ppb, greatly outperforming any previously reported sensors.
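A figure like 2 ppb is typically quoted using the common convention LOD = 3σ/slope, where σ is the baseline noise of a blank measurement and the slope comes from a linear calibration. The sketch below illustrates the convention with invented calibration numbers – they are not the paper's data:

```python
import numpy as np

# Hypothetical calibration: sensor response vs isoprene concentration.
conc_ppb = np.array([2, 10, 50, 100, 250, 500], dtype=float)
response = np.array([0.41, 2.0, 10.1, 19.8, 50.3, 99.6])  # arbitrary units

# Linear fit over the working range gives the calibration slope.
slope, intercept = np.polyfit(conc_ppb, response, 1)

# Common convention: LOD = 3 * (std of blank/baseline noise) / slope.
sigma_blank = 0.13  # assumed baseline noise, same arbitrary units
lod_ppb = 3 * sigma_blank / slope

print(f"slope = {slope:.3f} a.u./ppb, LOD ~ {lod_ppb:.1f} ppb")
```

A steeper calibration slope or a quieter baseline both push the LOD down, which is why the high response and stable baseline of Pt@InNiOx translate directly into single-digit-ppb sensitivity.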

The Pt@InNiOx sensor also showed high selectivity, exhibiting 3–7 times higher response to isoprene than to other volatile organic compounds commonly found in breath. Pt@InNiOx also exhibited good repeatability over nine cycles of 500 ppb isoprene sensing.

The team next examined how humidity affects the sensors – an important factor as exhaled breath usually has a relative humidity above 65%. The InNiOx and Pt@InNiOx sensors maintained a stable current baseline in the presence of water vapour. In contrast, the In2O3 sensor showed more than a 100% baseline increase. Similarly, the isoprene sensing performance of InNiOx and Pt@InNiOx was unaffected by water vapour, while the In2O3 response decreased to less than 0.5% of its original value as relative humidity reached 80%.

The team also used simultaneous spectroscopic and electrical measurements to investigate the isoprene sensing mechanism. They found that nanoclusters of platinum in the nanoflakes play a pivotal role by catalysing the oxidation of isoprene C=C bonds, which releases electrons and triggers the isoprene-sensing process.

Clinical testing

As the performance tests indicated that Pt@InNiOx may provide an optimal sensing material for detecting ultralow levels of isoprene, the researchers integrated Pt@InNiOx nanoflakes into a portable breath sensing device. They collected exhaled breath from eight healthy individuals and five lung cancer patients, and then transferred the exhaled gases from the gas collection bags into the digital device, which displays the isoprene concentration on its screen.

The sensing device revealed that exhaled isoprene concentrations in lung cancer patients were consistently below 40 ppb, compared with more than 60 ppb in healthy individuals. As such, the device successfully distinguished individuals with lung cancer from healthy people.

“These findings underscore the effectiveness of the Pt@InNiOx sensor in real-world scenarios, validating its potential for rapid and cost-effective lung cancer diagnosis,” the researchers write. “Integrating this ultrasensitive sensing material into a portable device holds significant implications for at-home surveillance for lung cancer patients, enabling dynamic monitoring of their health status.”

Looking to future commercialization of this technology, the researchers note that this will require further research on the sensing materials and the relationship between breath isoprene levels and lung cancer. “By addressing these areas and finishing the rigorous clinical trials, breath isoprene gas sensing technology could become a transformative tool in the noninvasive detection of lung cancer, ultimately saving lives and improving healthcare,” they conclude.

“Currently, we’re cooperating with a local hospital for large-scale clinical testing and evaluating the potentials to be applied for other cancers such as prostate cancer,” Wang tells Physics World.

The researchers report their findings in ACS Sensors.


Why we need more pride in physics https://physicsworld.com/a/why-we-need-more-pride-in-physics/ Mon, 18 Nov 2024 07:00:12 +0000 https://physicsworld.com/?p=117490 Artemis Peck and Wendy Sadler explain that we need more initiatives to make sure queer people feel comfortable and welcome in science

The post Why we need more pride in physics appeared first on Physics World.

Ask the average person in the street to describe a physicist and they will probably outline an eccentric older man with grey wiry hair wearing a lab coat or tweed jacket with elbow patches and a pair of glasses. While some members of the physics community do look like that – and there’s nothing wrong with it if they do – it’s certainly not representative of the whole. Indeed, since the 1960s researchers have been regularly testing children’s perceptions of scientists with the “draw-a-scientist test”. This has seen a decrease in “masculine-coded” results from 99.4% in the 1970s to 73% in 2018. That figure is still high, but the drop is a welcome development that is likely due to an increase in female scientists being featured in both traditional and social media.

Despite such progress, however, physics still comes across as a cisgender-heterosexual-dominated subject. Some may claim that science doesn’t care about identity and, yes, in an ideal world this would be true – you would leave identity at the lab door and just get on with doing physics. Yet this is a classic example of inequity. While treating everybody the same sounds great in theory, in practice a one-size-fits-all approach doesn’t create a conducive atmosphere for work and study. So how do we encourage the queer community into science and make them feel more comfortable?

To find out, we surveyed 160 students and staff at UK universities who identify as queer about their experiences and inspirations. When asked to rate how comfortable queer people feel in different scenarios between one (“completely uncomfortable”) and 10 (“completely comfortable”), respondents’ average score was 7.96 when it came to how they felt among their peers but just 5.66 in an academic setting. This difference was even starker for people who identify as transgender, who reported a score of 8.0 with peers and as low as 4.96 within academia.

To get a more detailed picture, we also did follow-up interviews with respondents who left contact information. From these interviews, the idea of “belonging” came up a lot. Participants stated that if they don’t see people like them at a job interview, they will think twice about accepting a position in that organization. Almost half of transgender respondents said they expect to have difficulty getting into a science-related career, compared with just 8.9% of queer cisgender respondents.

The lack of role models in science is a critical factor. Over three-quarters of respondents generally disagreed with the statement “there are enough queer role models in STEM”, with some saying it is “severely lacking” while also acknowledging how complicated it can be for queer people to put themselves “out there”.

While teachers are an important inspiration for both transgender and cisgender people, fictional role models play a greater role for transgender people. On a scale from one (no influence) to seven (most influence), transgender people were slightly more inclined towards fictional role models than cisgender people (4.25 versus 3.52). Fiction offers a particular avenue for transgender people through the “queer coding” of traditionally cisgender, heterosexual characters. One survey respondent explained how, as a child, they interpreted The Doctor from TV’s Doctor Who as a queer role model.

Targeted schemes

Queer people clearly do not feel well represented in science, neither within their institutions nor in the media. The solutions to both issues are intertwined. The media will not see an increase in queer scientists until we have more queer scientists, and we won’t have more queer scientists until queer people can see science as a safe and welcoming career option. Time magazine’s top 100 influential people for 2020, for example, contained 17 scientists, but the Guardian’s list of LGBTQ+ influencers for 2024 contained no scientists at all.

There are things we can do to make science more accepting on a personal level such as displaying pronouns as standard in all communication, and signposting to queer networks within or beyond our organizations. One interviewee suggested queer people wear something like a Pride pin badge to create more visibility within the science community so that newly recruited queer people feel like they belong.

We also need targeted outreach to queer audiences in a similar way to how schemes have been created to increase women’s participation in science. Local Pride events or queer youth group meetings could be a good way to reach queer people without making them feel singled out and “othered”. The Institute of Physics, which publishes Physics World, regularly attends Pride events, for example, and this type of activity should be encouraged in other physics and science-based groups and industries to show they are actively seeking and welcoming connections and talent from the queer community.

As well as increasing access to real-life role models, fiction could be used to create accessible role models, especially for the transgender community. More scientific characters in films, books and TV series who identify as queer would help to give future queer scientists people they can relate to and help them feel they belong in science. By making these small but meaningful changes in institutions and supporting related cultural initiatives, we can show that science can indeed be for everybody and not just a select few.

  • This article is based on the results of a final year BSc project by Artemis Peck.

The post Why we need more pride in physics appeared first on Physics World.

]]>
Opinion and reviews Artemis Peck and Wendy Sadler explain that we need more initiatives to make sure queer people feel comfortable and welcome in science https://physicsworld.com/wp-content/uploads/2024/11/2024-11-Forum-Progress-Pride-flag-intersex-inclusive-2083204426-Shutterstock_Svet-foto.jpg newsletter
Quantum showcase sets out next decade of UK quantum https://physicsworld.com/a/quantum-showcase-sets-out-next-decade-of-uk-quantum/ Fri, 15 Nov 2024 14:05:30 +0000 https://physicsworld.com/?p=118204 The UK National Quantum Technologies Showcase set out a bold ambition for the sector. Katherine Skipper asks whether the science is ready to deliver

The post Quantum showcase sets out next decade of UK quantum appeared first on Physics World.

]]>
Described as “the Glastonbury of quantum events” by one speaker, the UK National Quantum Technologies Showcase 2024 last week was the first time I have ever queued for a physics event. Essentially a quantum trade show, the showcase has been running for a decade, and in that time its attendance has grown from 100 to nearly 2000. It’s run by Innovate UK in collaboration with the Engineering and Physical Sciences Research Council (EPSRC) and the UK National Quantum Technologies Programme (NQTP).

Nearly 100 quantum companies exhibited and there were talks and panels throughout the day. The mood was triumphant – last year the UK government announced the next phase of the NQTP, backed by a £2.5 billion 10-year quantum strategy, and in September, five quantum hubs were launched at British universities (with some overlap with the four previous hubs). However, for a sector that’s still finding its feet, the increasing focus on commercialization and industry creates some interesting tensions.

Commitment to quantum

Most of the funding for quantum technologies research in the UK comes from the public sector, and in the wake of the election of a new government, the organizers clearly felt a need to assuage post-election jitters.

The first speaker was Dave Smith, the UK’s national technology adviser, who gave an ambitious outline of the next decade of the government’s quantum strategy, which he expects to “grow the economy and make people’s lives better”. To do this, the UK quantum sector needs two things: talent and money. Smith’s speech focussed on the need to attract overseas talent, train apprentices and PhD students, and encourage private investors to dip their toes into quantum.

“We’ve gone from the preserve of academia to real-world applications,” said Stella Peace, the recently appointed interim executive chair of Innovate UK, who spoke next. Her address made similar points to Smith’s, emphasizing that as well as funding quantum directly, Innovate UK aims to create connections between academia and industry that will grow the sector.

One senior figure with experience of the industry, government and academia aspects of quantum technology is the physicist Peter Knight from Imperial College London, who has been involved in the NQTP since it started and is now the chair of its strategic advisory board. Knight gave an insightful first-hand account of the last decade of the UK’s quantum programme. He said he was reassured that the new government is committed to quantum technology, but as with anything involving billions of pounds, making this a priority hasn’t been easy and Knight’s work is far from over. He described the researchers who led the first quantum hubs as “heroes” but added that “you can be heroic and fail”. According to Knight, to realize the potential of quantum technologies, “we need more than heroes, we need money”.

I spent the rest of the day alternating between the exhibition area and the talks. I saw established companies like Toshiba and British Telecom (BT) that are branching into quantum, as well as start-ups including Phasecraft and Quantum Dice.

A lively panel event on quantum skills was a particular highlight. The quantum sector faces a shortage of engineers, and the panellists debated whether quantum science should be integrated into existing engineering degrees and apprenticeships. A dissenting voice came from Rhys Morgan, the director of engineering and education at the Royal Academy of Engineering. “I’m not sure I agree with the need for a quantum apprenticeship,” he said, arguing that quantum companies should be training engineers on the job rather than expecting them to specialize during their degree.

Quantum at the crossroads

The UK government plans to invest £2.5bn in quantum technologies over the next decade and wants to attract an additional £1bn from private investment. The goal is to achieve a “quantum-enabled economy” by 2033. “Over the next 10 years,” states the National Quantum Strategy, “quantum technologies will revolutionize many aspects of life in the UK and bring enormous benefits to the UK economy, society and the way we can protect our planet.”

This is a bold statement. It sounds like the government expects to start getting a return on its quantum investment in the near future. But is that realistic?

“Quantum technologies” is an imprecise term, but where it refers to computing and communications, it’s still firmly in the research phase of research and development. Even quantum sensing start-ups like Cerca Magnetics and Delta G are just starting to move towards commercialization. Quantum research has made huge strides but scientists and companies should be realistic about its current capabilities and advocate for space and time to explore work that might not come to fruition in the next decade.

This was summed up in the final address from Roger McKinley, the quantum technologies challenge director at UK Research and Innovation (UKRI). His message to the government was that quantum commercialization is going to happen, but that they need to ask themselves: “How much do you want this to happen in the UK?”

Whatever you think about the hype over quantum technologies, researchers in the UK can celebrate the last decade, in which the country has punched above its weight in terms of quantum investment and research. However, there’s a lot of work still to do. If quantum researchers are serious about bringing these technologies to the real world, they should be prepared to keep fighting for them.

The post Quantum showcase sets out next decade of UK quantum appeared first on Physics World.

]]>
Blog The UK National Quantum Technologies Showcase set out a bold ambition for the sector. Katherine Skipper asks whether the science is ready to deliver https://physicsworld.com/wp-content/uploads/2024/11/20241108_110943-scaled.jpg newsletter
How Albert Einstein and John Bell inspired Artur Ekert’s breakthrough in quantum cryptography https://physicsworld.com/a/how-albert-einstein-and-john-bell-inspired-artur-ekerts-breakthrough-in-quantum-cryptography/ Fri, 15 Nov 2024 12:00:21 +0000 https://physicsworld.com/?p=118219 The quantum physicist gives the Royal Society Milner Prize Lecture

The post How Albert Einstein and John Bell inspired Artur Ekert’s breakthrough in quantum cryptography appeared first on Physics World.

]]>
If you love science and are near London, the Royal Society runs a wonderful series of public events that are free of charge. This week, I had the pleasure of attending the Royal Society Milner Prize Lecture, which was given by the quantum cryptography pioneer Artur Ekert. The prize is described as “the premier European award for outstanding achievement in computer science” and his lecture was called “Privacy for the paranoid ones: the ultimate limits of secrecy“. I travelled up from Bristol to see the lecture and I enjoyed it very much.

Ekert has academic appointments at the University of Oxford, the National University of Singapore and the Okinawa Institute of Science and Technology. He bagged this year’s prize “for his pioneering contributions to quantum communication and computation, which transformed the field of quantum information science from a niche academic activity into a vibrant interdisciplinary field of industrial relevance”.

Ekert is perhaps most famous for his invention in 1991 of entanglement-based quantum cryptography. However, his lecture kicked off several millennia earlier with an example of a permutation cypher called a scytale. Used by the ancient Greeks, the cypher conceals a message in a series of letters written on a strip of paper. When the paper is wound around a cylinder of the correct radius, the message appears – so it is not that difficult to decipher if you have a set of cylinders of different radii.
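
To see just how easily a scytale yields its secret, here is a minimal Python sketch of the transposition it performs (the message and circumference below are invented for illustration):

```python
def scytale_encrypt(message, circumference):
    """Write the message row by row around a 'cylinder' of the given
    circumference, then read it off column by column."""
    width = -(-len(message) // circumference) * circumference  # pad to full rows
    padded = message.ljust(width)
    rows = [padded[i:i + circumference] for i in range(0, width, circumference)]
    return "".join(row[c] for c in range(circumference) for row in rows)

def scytale_decrypt(ciphertext, circumference):
    """Unwinding is just re-wrapping with the transposed dimensions."""
    num_rows = -(-len(ciphertext) // circumference)
    return scytale_encrypt(ciphertext, num_rows).rstrip()

scytale_encrypt("ATTACKATDAWN", 4)   # -> 'ACDTKATAWATN'
scytale_decrypt("ACDTKATAWATN", 4)   # -> 'ATTACKATDAWN'
```

An attacker without the right rod simply tries circumferences one by one until words appear – exactly the weakness Ekert pointed out.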

Several hundred years later things had improved somewhat, with the Romans using substitution cyphers whereby letters are substituted for each other according to a secret key that is shared by sender and receiver. The problem with this, explained Ekert, is that if the same key is used to encrypt multiple messages, patterns will emerge in the secret messages. For example, “e” is the most common letter in English, and if it is substituted by “p”, then that letter will be the most common letter in the encrypted messages.

Maths and codebreaking

Ekert said that this statistical codebreaking technique was developed in the 9th century by the Arab polymath Al-Kindi. This appears to be the start of the centuries-long relationship between mathematicians and code makers and breakers that thrives today at places like the UK’s Government Communications Headquarters (GCHQ).

Substitution cyphers can be improved by constantly changing the key, but then the problem becomes how to distribute keys in a secure way – and that’s where quantum physics comes in. While classical key-exchange schemes such as those based on RSA are very difficult to crack, quantum protocols can be proven to be unbreakable – assuming that they are implemented properly.

Ekert’s entanglement-based protocol is called E91, and he explained how it has its roots in the Einstein–Podolsky–Rosen (EPR) paradox. This is a thought experiment that was devised in 1935 by Albert Einstein and colleagues to show that quantum mechanics was “incomplete” in how it described reality. They argued that classical physics with extra “hidden variables” could explain correlations that arise when measurements are made on two particles that are in what we now call a quantum-entangled state.

Ekert then fast-forwarded nearly three decades to 1964, when the Northern Irish physicist John Bell came up with a mathematical framework to test whether an entangled quantum state can indeed be described using classical physics and hidden variables. Starting in the 1970s, physicists did a series of experiments called Bell tests that have established that correlations observed in quantum systems cannot be explained by classical physics and hidden variables. This work led to John Clauser, Alain Aspect and Anton Zeilinger sharing the 2022 Nobel Prize for Physics.

Test for eavesdropping

In 1991, Ekert realised that a Bell test could be used to reveal whether a secret communication using entangled photons had been intercepted by an eavesdropper. The idea is that the eavesdropper’s act of measurement would destroy entanglement and leave the photon pairs with classical, rather than quantum, correlations.
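
The statistic behind this check is the CHSH form of Bell’s inequality. As an illustrative sketch – assuming ideal singlet-state correlations E(a, b) = −cos(a − b), not any particular experiment – a few lines of Python show why intact entanglement certifies security:

```python
import math

def chsh(E):
    """CHSH combination for a correlation function E(a, b), using the
    standard measurement angles a=0, a'=pi/2, b=pi/4, b'=3*pi/4."""
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Ideal correlations for an undisturbed entangled (singlet) pair
quantum = lambda a, b: -math.cos(a - b)

S = chsh(quantum)  # |S| = 2*sqrt(2), about 2.83
```

Any classical (hidden-variable) description obeys |S| ≤ 2, so users who measure |S| close to 2√2 know their photons were still entangled; an eavesdropper’s measurement would drag |S| back into the classical range.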

That year, Ekert along with John Rarity and Paul Tapster demonstrated E91 at the UK’s Defence Research Agency in Malvern. In the intervening decades E91 and other quantum key distribution (QKD) protocols have been implemented in a number of different scenarios – including satellite communications – and some QKD protocols are commercially available.

However, Ekert points out that quantum solutions are not available for all cryptographic applications – they tend to work best for the exchange of messages, rather than the password protection of documents, for example. He also said that developers and users must ensure that QKD protocols are implemented properly using equipment that works as expected. Indeed, Ekert points out that the current interest in identifying and closing “Bell loopholes” is related to QKD. Loopholes are situations where classical phenomena could inadvertently affect a Bell test, making a classical system appear quantum.

So, there is much more work for Ekert and his colleagues to do in quantum cryptography. And if the enthusiasm of his talk is any indication, Ekert is up for the challenge.

The post How Albert Einstein and John Bell inspired Artur Ekert’s breakthrough in quantum cryptography appeared first on Physics World.

]]>
Blog The quantum physicist gives the Royal Society Milner Prize Lecture https://physicsworld.com/wp-content/uploads/2024/11/15-11-2024-Artur-Ekert.jpg newsletter1
Physicists in cancer radiotherapy https://physicsworld.com/a/physicists-in-cancer-radiotherapy/ Fri, 15 Nov 2024 10:57:21 +0000 https://physicsworld.com/?p=118224 Introducing a new master’s in cancer research from the University of Manchester

The post Physicists in cancer radiotherapy appeared first on Physics World.

]]>
The programme focuses on the cancer radiation therapy patient pathway, with the aim of equipping students with the skills to progress onto careers in clinical, academic research or commercial medical physics opportunities.

Alan McWilliam, programme director of the new course, is also a reader in translational radiotherapy physics. He explains: “Radiotherapy is a mainstay of cancer treatment, used in around 50% of all treatments, and can be used together with surgery or systemic treatments like chemotherapy or immunotherapy. With a heritage dating back over 100 years, radiotherapy is now highly technical, allowing the radiation to be delivered with pin-point accuracy and is increasingly interdisciplinary to ensure a high-quality, curative delivery of radiation to every patient.”

“This new course builds on the research expertise at Manchester and benefits from being part of one of the largest university cancer departments in Europe, covering all aspects of cancer research. We believe this master’s reflects the modern field of medical physics, spanning the multidisciplinary nature of the field.”

Cancer pioneers

Manchester has a long history of developing solutions to drive improvements in healthcare, patients’ lives and the wellbeing of individuals. This new course draws on scientific research and innovation to equip those interested in a career in medical physics or cancer research with specialist skills that draw on a breadth of knowledge. Indeed, the course units bring together expertise from academics who have pioneered, amongst other work, the use of image-guided radiotherapy, big data analysis using real-world radiotherapy data, novel MR imaging for tracking oxygenation of tumours during radiotherapy, and proton research beam lines. Students will benefit directly from this network of research groups by being able to join research seminars throughout the course.

Working with clinical scientists

The master’s course is taught together with clinical physicists from The Christie NHS Foundation Trust, one of the largest single-site cancer hospitals in Europe and the only UK cancer hospital connected directly to a research institute. The radiotherapy department currently has 16 linear accelerators across four sites, an MR-guided radiotherapy service and one of the two NHS high-energy proton beam services. The Christie is currently one of only two cancer centres in the world with access to both proton beam and an MR-guided linear accelerator. For students, this partnership provides the opportunity to work with people at the forefront of cancer treatment developments.

To reflect the current state of radiotherapy, the University of Manchester has worked with The Christie to ensure students gain the skills necessary for a successful, modern, medical physics career. Units have a strong clinical focus, with access to technology that allows students to experience and learn from clinical workflows.

Students will learn the fundamentals of how radiotherapy works, from the interactions of X-rays with matter, through X-ray beam generation, control and measurement, to how treatments are planned. Complementary to X-ray therapy, students will learn about the concepts of proton beam therapy, how the delivery of protons differs from that of X-rays, and the potential clinical benefits and unique difficulties of protons arising from the greater uncertainties in how protons interact with matter.

Delivering radiation with pin-point accuracy

The course will provide an in-depth understanding of how imaging can be used throughout the patient pathway to aid treatment decisions and guide the delivery of radiation.

The utility of CT, MRI and PET scanners across clinical pathways is explored, and the area of radiation delivery is complemented by material on radiobiology – how cells and tissues respond to radiation.

The difference between the response of tumours and normal tissue to radiation is called the therapeutic ratio. The radiobiology teaching will focus on how to maximize this ratio, essentially how to improve cure whilst minimising the risk of side-effects due to irradiation of nearby normal tissues. Students will also explore how this ratio could be enhanced or modified to improve the efficacy of all forms of radiotherapy.
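
To give a flavour of the radiobiology arithmetic, here is a simple sketch using the standard linear-quadratic (biologically effective dose) model. The α/β values below are typical textbook figures, not data from the course:

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) in the linear-quadratic model:
    BED = D * (1 + d / (alpha/beta))."""
    return total_dose * (1 + dose_per_fraction / alpha_beta)

# Typical textbook values: alpha/beta ~ 10 Gy for tumours, ~ 3 Gy for
# late-responding normal tissue. Compare one 10 Gy dose with 5 x 2 Gy.
single_tumour, single_normal = bed(10, 10, 10), bed(10, 10, 3)
frac_tumour, frac_normal = bed(10, 2, 10), bed(10, 2, 3)

# Fractionation lowers the normal-tissue BED (16.7 Gy vs 43.3 Gy) far more
# than the tumour BED (12 Gy vs 20 Gy), improving the therapeutic ratio.
```

This is why splitting a course of radiotherapy into daily fractions has long been a standard way of sparing normal tissue while still controlling the tumour.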

Research and technology

A core strength of the research groups in Manchester is the use of routinely collected data to evaluate improvements in treatment delivery or the clinical translation of research findings. Many such improvements do not qualify for a full randomized clinical trial, but there are many pragmatic methods to evaluate clinical benefit. The course explores these concepts through the study of clinical workflows and translation, and investigates how to maximize the value of all available data.

Modern medical physicists need an appreciation of artificial intelligence (AI). AI is emerging as an automation tool throughout the radiation therapy workflow; for example, segmentation of tissues, radiotherapy planning and quality assurance. This course delves into the fundamentals of AI and machine learning, giving students the opportunity to implement their own solution for image classification or image segmentation. For those with leadership aspirations, guest lecturers from various academic, clinical or commercial backgrounds will detail career routes and how to develop knowledge in this area.

Pioneering new learning and assessments

Programme director Alan McWilliam talks us through the design of the course and how students are evaluated:

“An aspect of the teaching we are particularly proud of is the design of the assessments throughout the units. Gone are written exams, with assessments allowing students to apply their new knowledge to real medical physics problems. Students will perform dosimetric calculations and Monte Carlo simulations of proton depositions, as well as build an image registration pipeline and pitch for funding in a dragon’s den (or shark tank) scenario. This form of assessment will allow students to demonstrate skills directly useful for future career pathways.”

“The final part of the course is the research project, to take place after the taught elements are complete. Students will choose from projects which will embed them with one of the academic or clinical groups. Examples for the current cohort include training an AI segmentation model for muscle in CT images and associating this with treatment outcomes; simulating prompt gamma rays from proton deliveries for dose verification; and assisting with commissioning MR-guided workflows for ultra-central lung treatments.”

Develop your specialist skills

The Medical Physics in Cancer Radiation Therapy MSc is a one-year full-time (two-year part-time) programme at the University of Manchester.

Applications are now open for the next academic year, and it is recommended to apply early, as applications may close if the course is full.

Find out more and apply: https://uom.link/medphyscancer 

The post Physicists in cancer radiotherapy appeared first on Physics World.

]]>
Employer-supplied feature Introducing a new master’s in cancer research from the University of Manchester https://physicsworld.com/wp-content/uploads/2024/11/WEB-planning.png
EU must double its science budget to remain competitive, warns report https://physicsworld.com/a/eu-must-double-its-science-budget-to-remain-competitive-warns-report/ Fri, 15 Nov 2024 09:00:19 +0000 https://physicsworld.com/?p=118205 A committee of 15 experts from research and industry call for Framework Programme 10 to have a budget of at least €220bn

The post EU must double its science budget to remain competitive, warns report appeared first on Physics World.

]]>
The European Union should more than double its budget for research and innovation in its next spending round, dubbed Framework Programme 10 (FP10). That’s the view of a report by an expert group, which says a dramatic increase to €220bn is needed for European science to be globally competitive once again. Its recommendations are expected to have a big influence over the European Commission’s proposals for FP10, due in mid-2025.

The EU’s current Horizon Europe programme, which runs from 2021 to 2027, has a budget of €95.5bn. In December 2023, the Commission picked 15 experts from research and industry – led by former Portuguese science minister Manuel Heitor – to advise on FP10, which is set to run from 2028 to 2034. According to their report, Europe is lagging behind in investment and impact in science, technology and innovation.

It says Europe’s share of global scientific publications, most-cited publications and patent applications have dropped over the last 20 years. Europe’s technology base, it claims, is more diverse than other major economies, but also more focused on less complex technologies. China and the US, in contrast, lead in areas expected to drive future growth, such as semiconductors, optics, digital communications and audio-visual technologies.

The experts also say the “disruptive, paradigm shifting research and innovation” that Europe needs to boost its economies is “unlikely to be fostered by conventional procedures and programmes in the EU today”. They want the EU to set up an experimental unit to test and launch disruptive innovation programmes with “fast funding” options. It should develop programmes like those of the US advanced research projects agencies and explore how generative AI could be used in science.

Based on analysis of previous unfunded proposals, the report claims that FP10’s budget should be more than doubled to €220bn to “guarantee funding of all high-quality proposals”. It also says that funding applications need to be simplified and streamlined, with funding handed out more quickly. The report further calls for better international collaboration, including with China, and for disruptive innovation programmes, such as military–civilian “dual-use” innovation.

Launching the report, Heitor said there was a need “to put research technology and innovation in the centre of European economies”, adding that the expert group was calling for “radical simplification and innovation” for the next programme. Europe needs to pursue a “transformative agenda” in FP10 around four interlinked areas: competitive excellence in science and innovation; industrial competitiveness; societal challenges; and a strong European research and innovation ecosystem.

The post EU must double its science budget to remain competitive, warns report appeared first on Physics World.

]]>
News A committee of 15 experts from research and industry call for Framework Programme 10 to have a budget of at least €220bn https://physicsworld.com/wp-content/uploads/2024/11/funding-euros-web-11464558_iStock_Sagadogo.jpg 1
Space travel: the health effects of space radiation and building a lunar GPS https://physicsworld.com/a/space-travel-the-health-effects-of-space-radiation-and-building-a-lunar-gps/ Thu, 14 Nov 2024 18:06:02 +0000 https://physicsworld.com/?p=118214 We chat to a radiation oncologist and two atomic-clock experts

The post Space travel: the health effects of space radiation and building a lunar GPS appeared first on Physics World.

]]>
We are entering a second golden age of space travel – with human missions to the Moon and Mars planned for the near future. In this episode of the Physics World Weekly podcast we explore two very different challenges facing the next generation of cosmic explorers.

First up, the radiation oncologist James Welsh chats with Physics World’s Tami Freeman about his new ebook about the biological effects of space radiation on astronauts. They talk about the types and origins of space radiation and how they impact human health. Despite the real dangers, Welsh explains that the human body appears to be more resilient to radiation than are the microelectronics used on spacecraft. Based at Loyola Medicine in the US, Welsh explains why damage to computers, rather than the health of astronauts, could be the limiting factor for space exploration.

Later in the episode I am in conversation with two physicists who have written a paper about how we could implement a universal time standard for the Moon. Based at the US’s National Institute of Standards and Technology (NIST), Biju Patla and Neil Ashby explain how atomic clocks could be used to create a time system that would make coordinating lunar activities easier – and could operate as a GPS-like system to facilitate navigation. They also say that such a lunar system could be a prototype for a more ambitious system on Mars.
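
A back-of-envelope calculation hints at why a dedicated lunar time standard is needed at all. The sketch below keeps only the leading gravitational-potential and velocity terms – a first-order illustration, not the full relativistic treatment in Patla and Ashby’s paper:

```python
# Estimate how fast a clock on the lunar surface runs relative to one on
# Earth, keeping only the leading 1/c^2 terms (gravitational potential and
# orbital/rotational speeds). Constants are standard reference values.
C = 299_792_458.0          # speed of light, m/s
GM_EARTH = 3.986004e14     # m^3/s^2
GM_MOON = 4.9028e12
R_EARTH = 6.371e6          # mean Earth radius, m
R_MOON = 1.7374e6          # mean lunar radius, m
D_MOON = 3.844e8           # mean Earth-Moon distance, m
V_MOON = 1.022e3           # Moon's orbital speed, m/s
V_EQUATOR = 465.0          # Earth's equatorial rotation speed, m/s

# Fractional clock rate relative to a distant observer:
# -U/c^2 - v^2/(2 c^2), with U = GM/r the potential at the clock.
earth_rate = -(GM_EARTH / R_EARTH) / C**2 - V_EQUATOR**2 / (2 * C**2)
moon_rate = -((GM_MOON / R_MOON + GM_EARTH / D_MOON) / C**2
              + V_MOON**2 / (2 * C**2))

microseconds_per_day = (moon_rate - earth_rate) * 86400 * 1e6
```

The result, roughly 56 microseconds per day, means a lunar clock steadily pulls ahead of its Earth counterpart – more than enough drift to wreck precise navigation if left uncorrected.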

The post Space travel: the health effects of space radiation and building a lunar GPS appeared first on Physics World.

]]>
Podcasts We chat to a radiation oncologist and two atomic-clock experts https://physicsworld.com/wp-content/uploads/2024/11/14-11-2024-Mars-settlement-colony-1433158079-Shutterstock_Dotted-Yeti.jpg newsletter
Hybrid irradiation could facilitate clinical translation of FLASH radiotherapy https://physicsworld.com/a/hybrid-irradiation-could-facilitate-clinical-translation-of-flash-radiotherapy/ Thu, 14 Nov 2024 11:30:31 +0000 https://physicsworld.com/?p=118189 A combination of ultrahigh-dose rate electron and conventional photon radiotherapy could enable FLASH treatments of deep-seated tumours

The post Hybrid irradiation could facilitate clinical translation of FLASH radiotherapy appeared first on Physics World.

]]>
Dosimetric comparisons of prostate cancer treatment plans

FLASH radiotherapy is an emerging cancer treatment that delivers radiation at extremely high dose rates within a fraction of a second. This innovative radiation delivery technique, dramatically faster than conventional radiotherapy, reduces radiation injury to surrounding healthy tissues while effectively targeting malignant tumour cells.
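
A quick comparison shows what “within a fraction of a second” means in practice. The dose and dose rates below are illustrative orders of magnitude (UHDR is commonly taken as tens of grays per second or more), not figures from this study:

```python
PRESCRIBED_DOSE = 10.0   # Gy, hypothetical single fraction
UHDR = 100.0             # Gy/s, illustrative FLASH-regime dose rate
CDR = 0.1                # Gy/s, order of magnitude for conventional delivery

flash_time = PRESCRIBED_DOSE / UHDR   # ~0.1 s: the whole dose in a flash
conventional_time = PRESCRIBED_DOSE / CDR   # ~100 s at conventional rates
```

Compressing the delivery by roughly three orders of magnitude is what appears to trigger the normal-tissue-sparing FLASH effect.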

Preclinical studies of laboratory animals have demonstrated that FLASH radiotherapy is at least equivalent to conventional radiotherapy, and may produce better anti-tumour effects in some types of cancer. The biological “FLASH effect”, which is observed for ultrahigh-dose rate (UHDR) irradiations, spares normal tissue compared with conventional dose rate (CDR) irradiations, while retaining the tumour toxicity.

By opening up the therapeutic window, FLASH radiotherapy has the potential to benefit patients requiring radiotherapy. As such, efforts are underway worldwide to overcome the clinical challenges of safely adopting FLASH into clinical practice. Because the FLASH effect has mostly been investigated using broad UHDR electron beams, which have limited range and are best suited to treating superficial lesions, one important challenge is to find a way to treat deep-seated tumours effectively.

In a proof-of-concept treatment planning study, researchers in Switzerland demonstrated that a hybrid approach combining UHDR electron and CDR photon radiotherapy may achieve equivalent dosimetric effectiveness and quality to conventional radiotherapy, for the treatment of glioblastoma, pancreatic cancer and localized prostate cancer. The team, at Lausanne University Hospital and the University of Lausanne, report the findings in Radiotherapy and Oncology.

Combined device

This hybrid treatment could be facilitated using a linear accelerator (linac) with the capability to generate both UHDR electron beams and CDR photon beams. Such a radiotherapy device could eliminate concerns relating to the purchase, operational and maintenance costs of other proposed FLASH treatment devices. It would also overcome the logistical hurdles of needing to move patients between two separate radiotherapy treatment rooms and immobilize them identically twice.

For their study, the Lausanne team presumed that such a dual-use clinically approved linac exists. This linac would deliver a bulk radiation dose by a UHDR electron beam in a less conformal manner to achieve the FLASH effect, and then deliver conventional intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT) to enhance dosimetric target coverage and conformity.

Principal investigator Till Böhlen and colleagues created a machine model that simulates 3D-conformal broad electron beams with a homogeneous parallel fluence. They developed treatments that deliver a single broad UHDR electron beam with case-dependent energy of between 20 and 250 MeV for every treatment fraction, together with a CDR VMAT to produce a conformal dose delivery to the planning target volume (PTV).

The tumours for each of the three cancer cases required simple, mostly round PTVs that could be covered by a single electron beam. Each plan’s goal was to deliver the majority of the dose per treatment with the UHDR electron beam, while achieving acceptable PTV coverage, homogeneity and sparing of critical organs-at-risk.

Plan comparisons

The researchers assessed the plan quality based on absorbed dose distribution, dose–volume histograms and dose metric comparisons with the CDR reference plans used for clinical treatments. In all cases, the hybrid plans exhibited comparable dosimetric quality to the clinical plans. They also evaluated dose metrics for the parts of the doses delivered by the UHDR electron beam and by the CDR VMAT, observing that the hybrid plans delivered the majority of the PTV dose, and large parts of doses to surrounding tissues, at UHDR.

“This study demonstrates that hybrid treatments combining an UHDR electron field with a CDR VMAT may provide dosimetrically conformal treatments for tumours with simple target shapes in various body sites and depths in the patient, while delivering the majority of the prescribed dose per fraction at UHDR without delivery pauses,” the researchers write.

In another part of the study, the researchers estimated the potential FLASH sparing effect achievable with their hybrid technique, using the glioblastoma case as an example. They assumed a FLASH normal tissue sparing scenario with an onset of FLASH sparing at a threshold dose of 11 Gy/fraction, and a more favourable scenario with sparing onset at 3 Gy/fraction. The treatment comprised a single-fraction 15 Gy UHDR electron boost, supplemented with 26 fractions of CDR VMAT. The two scenarios yielded FLASH sparing of brain tissues of 10% and a more substantial 32%, respectively.
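The threshold-onset sparing described above can be illustrated with a generic dose-modifying model. This is a sketch only: the piecewise form and the 30% sparing factor are assumptions for illustration, not the model used in the Lausanne study.

```python
def flash_spared_dose(dose_gy, onset_gy, sparing):
    """Toy dose-modifying model of the FLASH effect: dose delivered
    above the sparing-onset threshold is reduced by the sparing
    factor, while dose below it is unchanged. Illustrative only."""
    if dose_gy <= onset_gy:
        return dose_gy
    return onset_gy + (1 - sparing) * (dose_gy - onset_gy)

# The 15 Gy UHDR electron boost under the two hypothetical scenarios
# (a 30% sparing factor is an assumed illustrative value):
print(flash_spared_dose(15, 11, 0.3))  # sparing onset at 11 Gy/fraction
print(flash_spared_dose(15, 3, 0.3))   # more favourable 3 Gy onset
# A typical 2 Gy conventional fraction sits below both thresholds:
print(flash_spared_dose(2, 11, 0.3))
```

The lower the onset threshold, the larger the fraction of the boost dose that benefits from sparing, which is why the 3 Gy scenario produces the greater effect.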

“Following up on this pilot study focusing on feasibility, the team is currently working on improving the joint optimization of the UHDR and CDR dose components to further enhance plan quality, flexibility and UHDR proportion of the delivered dose using the [hybrid] treatment approach,” Böhlen tells Physics World. “Additional work focuses on quantifying its biological benefits and advancing its technical realization.”

Trailblazer: astronaut Eileen Collins reflects on space, adventure, and the power of lifelong learning https://physicsworld.com/a/trailblazer-astronaut-eileen-collins-reflects-on-space-adventure-and-the-power-of-lifelong-learning/ Thu, 14 Nov 2024 09:39:06 +0000 https://physicsworld.com/?p=118150 Astronaut Eileen Collins and filmmaker Hannah Berryman discuss the new documentary SPACEWOMAN and the thrill of pushing human frontiers

The post Trailblazer: astronaut Eileen Collins reflects on space, adventure, and the power of lifelong learning appeared first on Physics World.

In this episode of Physics World Stories, astronaut Eileen Collins shares her extraordinary journey as the first woman to pilot and command a spacecraft. Collins broke barriers in space exploration, inspiring generations with her courage and commitment to discovery. Reflecting on her career, she discusses not only her time in space but also her lifelong sense of adventure and her recent passion for reading history books. Today, Collins frequently shares her experiences with audiences around the world, encouraging curiosity and inspiring others to pursue their dreams.

Joining the conversation is Hannah Berryman, director of the new documentary SPACEWOMAN, which is based on Collins’ memoir Through the Glass Ceiling to the Stars, co-written with Jonathan H Ward. The British filmmaker describes what attracted her to Collins’ story and the universal messages it reveals. Hosted by science communicator Andrew Glester, this episode offers a glimpse into the life of a true explorer – one whose spirit of adventure knows no bounds.

SPACEWOMAN has its world premiere on 16 November 2024 at DOC NYC. Keep an eye on the documentary’s website for details of how you can watch the film wherever you are.

Venkat Srinivasan: ‘Batteries are largely bipartisan’ https://physicsworld.com/a/venkat-srinivasan-batteries-are-largely-bipartisan/ Thu, 14 Nov 2024 03:05:23 +0000 https://physicsworld.com/?p=118152 Energy storage expert Venkat Srinivasan discusses the pros and cons of different battery technologies and the motivations people have for adopting them

The post Venkat Srinivasan: ‘Batteries are largely bipartisan’ appeared first on Physics World.

Which battery technologies are you focusing on at Argonne?

We work on everything. We work on lead-acid batteries, a technology that’s 100 years old, because the research community is saying, “If only we could solve this problem with cycle life in lead-acid batteries, we could use them for energy storage to add resilience to the electrical grid.” That’s an attractive prospect because lead-acid batteries are extremely cheap, and you can recycle them easily.

We work a lot on lithium-ion batteries, which is what you find in your electric car and your cell phone. The big challenge there is that lithium-ion batteries use nickel and cobalt, and while you can get nickel from a few places, most of the cobalt comes from the Democratic Republic of Congo, where there are safety and environmental concerns about exactly how that cobalt is being mined, and who is doing the mining. Then there’s lithium itself. The supply chain for lithium is concentrated in China, and we saw during COVID the problems that can cause. You have one disruption somewhere and the whole supply chain collapses.

We’re also looking at technologies beyond lithium-ion batteries. If you want to start using batteries for aviation, you need batteries with a long range, and for that you have to increase energy density. So we work on things like solid-state batteries.

Finally, we are working on what I would consider really “out there” technologies, where it might be 20 years before we see them used. Examples might be lithium-oxygen or lithium-sulphur batteries, but there’s also a move to go beyond lithium because of the supply chain issues I mentioned. One alternative might be to switch to sodium-based batteries. There’s a big supply of soda ash in the US, which is the raw material for sodium, and sodium batteries would allow us to eliminate cobalt while using very little nickel. If we can do that, the US can be completely reliant on its own domestic minerals and materials for batteries.

What are the challenges associated with these different technologies?

Frankly, every chemistry has its challenges, but I can give you an example.

If you look at the periodic table, the most electropositive element is lithium, while the most electronegative is fluorine. So you might think the ultimate battery would be lithium-fluorine. But in practice, nobody should be using fluorine – it’s super dangerous. The next best option is lithium-oxygen, which is nice because you can get oxygen from the air, although you have to purify it first. The energy density of a lithium-oxygen battery is comparable to that of gasoline, and that is why people have been trying to make solid-state lithium-metal batteries since before I was born.

Photo of Arturo Gutierrez and Venkat Srinivasan. Gutierrez is wearing safety glasses and a white lab coat and has his arms inside a glovebox while Srinivasan looks on

The problem is that when you charge a battery with a lithium metal anode, the lithium deposits on the metal, and unfortunately it doesn’t form a thin, planar layer. Instead, it forms needle-like structures called dendrites that can grow through the battery’s separator and short the cell. Battery shorting is never a good thing.

Now, if you put a mechanically hard material next to the lithium metal, you can stop the dendrites from growing through. It’s like putting in a concrete wall next to the roots of a tree to stop the roots growing into the other side. But if you have a crack in your concrete wall, the roots will find a way – they will actually crack the concrete – and exactly the same thing happens with dendrites.

So the question becomes, “Can we make a defect-free electrolyte that will stop the dendrites?” Companies have taken a shot at this, and on the small scale, things look great: if you’re making one or two devices, you can have incredible control. But in a large-format manufacturing setup where you’re trying to make hundreds of devices per second, even a single defect can come back to bite you. Going from the lab scale to the manufacturing scale is such a challenge.

What are the major goals in battery research right now?

It depends on the application. For electric cars, we still have to get the cost down, and my sense is that we’ll ultimately need batteries that charge in five minutes because that’s how long it takes to refuel a gasoline-powered car. I worry about safety, too, and of course there’s the supply-chain issue I mentioned.

But if you forget about supply chains for a second, I think if we can get fast charging with incredibly safe batteries while reducing the cost by a factor of two, we are golden. We’ll be able to do all sorts of things.

A researcher holding a plug kneels next to an electric car. The car has a sign on the front door that reads "Argonne research vehicle"

For aviation, it’s a different story. We think the targets are anywhere from increasing energy density by a factor of two for the air taxi market, all the way to a factor of six if you want an electric 737 that can fly from Chicago to Washington, DC with 75 passengers. That’s kind of hard. It may be impossible. You can go for a hybrid design, in which case you will not need as much energy density, but you need a lot of power density because even when you’re landing, you still have to defy gravity. That means you need power even when the vehicle is in its lowest state of charge.

The political landscape in the US is shifting as the Biden administration, which has been very focused on clean energy, makes way for a second presidential term for Donald Trump, who is not interested in reducing carbon emissions. How do you see that impacting battery research?

If you look at this question historically, ReCell, which is Argonne’s R&D centre for battery recycling, got established during the first Trump administration. Around the same time, we got the Federal Consortium for Advanced Batteries, which brought together the Department of Energy, the Department of Defense, the intelligence community, the State Department and the Department of Commerce. The reason all those groups were interested in batteries is that there’s a growing feeling that we need to have energy independence in the US when it comes to supply chains for batteries. It’s an important technology, there’s lots of innovations, and we need to find a way to move them to market.

So that came about during the Trump administration, and then the Biden administration doubled down on it. What that tells me is that batteries are largely bipartisan, and I think that’s at least partly because you can have different motivations for buying them. Many of my neighbours aren’t particularly thinking about carbon emissions when they buy an electric vehicle (EV). They just want to go from zero to 60 in three seconds. They love the experience. Similarly, people love to be off-grid, because they feel like they’re controlling their own stuff. I suspect that because of this, there will continue to be largely bipartisan support for EVs. I remain hopeful that that’s what will happen.

  • Venkat Srinivasan will appear alongside William Mustain and Martin Freer at a Physics World Live panel discussion on battery technologies on 21 November 2024. Sign up here.

UK plans £22bn splurge on carbon capture and storage https://physicsworld.com/a/uk-plans-22bn-splurge-on-carbon-capture-and-storage/ Wed, 13 Nov 2024 11:08:17 +0000 https://physicsworld.com/?p=118154 Despite the move, questions remain over the technology's feasibility

The post UK plans £22bn splurge on carbon capture and storage appeared first on Physics World.

Further details have emerged over the UK government’s pledge to spend almost £22bn on carbon capture and storage (CCS) in the next 25 years. While some climate scientists feel the money is vital to decarbonise heavy industry, others have raised concerns about the technology itself, including its feasibility at scale and potential to extend fossil fuel use rather than expanding renewable energy and other low-carbon technologies.

In 2023 the UK emitted about 380 million tonnes of carbon dioxide equivalent and the government claims that CCS could remove more than 8.5 million tonnes each year as part of its effort to be net-zero by 2050. Although there are currently no commercial CCS facilities in the UK, last year the previous Conservative government announced funding for two industrial clusters: HyNet in Merseyside and the East Coast Cluster in Teesside.

Projects at both clusters will capture carbon dioxide from various industrial sites, including hydrogen plants, a waste incinerator, a gas-fired power station and a cement works. The gas will then be transported down pipes to offshore storage sites, such as depleted oil and gas fields. According to the new Labour government, the plans will create 4000 jobs, with the wider CCS industry potentially supporting 50,000 roles.

Government ministers claim the strategy will make the UK a global leader in CCS and hydrogen production, and expect it to attract £8bn in private investment. Rachel Reeves, the chancellor, said in September that CCS is a “game-changing technology” that will “ignite growth”. The Conservatives’ strategy also included plans to set up two other clusters, but no progress has been made on these yet.

The new investment in CCS comes after advice from the independent Climate Change Committee, which said it is necessary for decarbonising the UK’s heavy industry and for the UK to reach its net-zero target. The International Energy Agency (IEA) and the Intergovernmental Panel on Climate Change have also endorsed CCS as critical for decarbonisation, particularly in heavy industry.

“The world is going to generate more carbon dioxide from burning fossil fuels than we can afford to dump into the atmosphere,” says Myles Allen, a climatologist at the University of Oxford. “It is utterly unrealistic to pretend otherwise. So, we need to scale up a massive global carbon dioxide disposal industry.” Allen adds, however, that discussions are needed about how CCS is funded. “It doesn’t make sense for private companies to make massive profits selling fossil fuels while taxpayers pay to clean up the mess.”

Out of options

Globally there are around 45 commercial facilities that capture about 50 million tonnes of carbon annually, roughly 0.14% of global emissions. According to the IEA, up to 435 million tonnes of carbon could be captured every year by 2030, depending on the progress of more than 700 announced CCS projects.
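The quoted share is easy to sanity-check. Note that the global emissions figure used below (~37 Gt CO2 per year) is an assumed ballpark and is not stated in the article:

```python
# Sanity check of the quoted capture share of global emissions.
captured_mt = 50              # Mt captured annually by ~45 facilities
global_emissions_mt = 37_000  # assumed global CO2 emissions, Mt/yr
print(f"{captured_mt / global_emissions_mt:.2%}")  # ≈ 0.14%
```

Even the IEA’s optimistic 435 Mt/yr figure for 2030 would, on the same assumption, amount to only just over 1% of current emissions.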

One key part of the UK government’s plans is to use CCS to produce so-called “blue” hydrogen. Most hydrogen is currently made by heating methane from natural gas with a catalyst, producing carbon monoxide and carbon dioxide as by-products. Blue hydrogen involves capturing and storing those by-products, thereby cutting carbon emissions.

But critics warn that blue hydrogen continues our reliance on fossil fuels and risks leaks along the natural gas supply chain. There are also concerns about its commercial feasibility. The Norwegian energy firm Equinor, which is set to build several UK-based hydrogen plants, has recently abandoned plans to pipe blue hydrogen to Germany, citing cost and lack of demand.

“The hydrogen pipeline hasn’t proved to be viable,” Equinor spokesperson Magnus Frantzen Eidsvold told Reuters, adding that its plans to produce hydrogen had been “put aside”. Shell has also scrapped plans for a blue hydrogen plant in Norway, saying that the market for the fuel had failed to materialise.


According to the Institute for Energy Economics and Financial Analysis (IEEFA), CCS “is costly, complex and risky with a history of underperformance and delays”. It believes that money earmarked for CCS would be better spent on proven decarbonisation technologies such as buildings insulation, renewable power, heat pumps and electric vehicles. It says the UK’s plans will make it “more reliant on fossil gas imports” and send “the wrong signal internationally about the need to stop expanding fossil fuel infrastructure”.

After delays to several CCS projects in the EU, there are also questions around progress on its target to store 50 million tonnes of carbon by 2030. Press reports have recently revealed, for example, that a pipeline connecting Germany’s Rhine-Ruhr industrial heartland to a Dutch undersea carbon storage project will not come online until at least 2032.

Jessica Jewell, an energy expert at Chalmers University of Technology in Sweden, and colleagues have also found that CCS plants have a failure rate of about 90%, largely because of poor investment prospects (Nature Climate Change 14 1047). “If we want CCS to expand and be taken more seriously, we have to make projects more profitable and make the financial picture work for investors,” Jewell told Physics World.

Subsidies like the UK plan could do so, she says, pointing out that wind power, for example, initially benefited from government support to bring costs down. Jewell’s research suggests that by cutting failure rates and enabling CCS to grow at the pace wind power did in the 2000s, it could capture a “not insignificant” 600 gigatonnes of carbon dioxide by 2100, which could help decarbonise heavy industry.

That view is echoed by Marcelle McManus, director of the Centre for Sustainable Energy Systems at the University of Bath, who says that decarbonising major industries such as cement, steel and chemicals is challenging and will benefit from CCS. “We are in a crisis and need all of the options available,” she says. “We don’t currently have enough renewable electricity to meet our needs, and some industrial processes are very hard to electrify.”

Although McManus admits we need “some storage of carbon”, she says it is vital to “create the pathways and technologies for a defossilised future”. CCS alone is not the answer and that, says Jewell, means rapidly expanding low carbon technologies like wind, solar and electric vehicles. “To meet our climate targets, we do face difficult choices. There is no easy way to get there.”

From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood https://physicsworld.com/a/from-melanoma-to-malaria-photoacoustic-device-detects-disease-without-taking-a-single-drop-of-blood/ Tue, 12 Nov 2024 09:30:08 +0000 https://physicsworld.com/?p=118135 New diagnostic test provides safe and sensitive detection of malaria infection by interrogating the blood through intact skin

The post From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood appeared first on Physics World.

Malaria remains a serious health concern, with deaths rising each year since 2019 and almost half of the world’s population at risk of infection. Existing diagnostic tests are less than optimal, and all rely on obtaining an invasive blood sample. Now, a research collaboration from the USA and Cameroon has demonstrated a device that can non-invasively detect this potentially deadly infection without requiring a single drop of blood.

Currently, malaria is diagnosed using optical microscopy or antigen-based rapid diagnostic tests, but both methods have low sensitivity. Polymerase chain reaction (PCR) tests are more sensitive, but still require blood sampling. The new platform – Cytophone – uses photoacoustic flow cytometry (PAFC) to rapidly identify malaria-infected red blood cells via a small probe placed on the back of the hand.

PAFC works by delivering low-energy laser pulses through the skin into a blood vessel and recording the thermoacoustic signals generated by absorbers in circulating blood. Cytophone, invented by Vladimir Zharov from the University of Arkansas for Medical Sciences, was originally developed as a universal diagnostic platform and first tested clinically for detection of cancerous melanoma cells.

“We selected melanoma because of the possibility of performing label-free detection of circulating cells using melanin as an endogenous biomarker,” explains Zharov. “This avoids the need for in vivo labelling by injecting contrast agents into blood.” For malaria diagnosis, Cytophone detects haemozoin, an iron crystal that accumulates in red blood cells infected with malaria parasites. These haemozoin biocrystals have unique magnetic and optical properties, making them a potential diagnostic target.

Photoacoustic detection

“The similarity between melanin and haemozoin biomarkers, especially the high photoacoustic contrast above the blood background, motivated us to bring a label-free malaria test with no blood drawing to malaria-endemic areas,” Zharov tells Physics World. “To build a clinical prototype for the Cameroon study we used a similar platform and just selected a smaller laser to make the device more portable.”

The Cytophone prototype uses a 1064 nm laser with a linear beam shape and a high pulse rate to interrogate fast moving blood cells within blood vessels. Haemozoin nanocrystals in infected red blood cells absorb this light (more strongly than haemoglobin in normal red blood cells), heat up and expand, generating acoustic waves. These signals are detected by an array of 16 tiny ultrasound transducers in acoustic contact with the skin. The transducers have focal volumes oriented in a line across the vessel, which increases sensitivity and resolution, and simplifies probe navigation.

In vivo testing

Zharov and collaborators – also from Yale School of Public Health and the University of Yaoundé I – tested the Cytophone in 30 Cameroonian adults diagnosed with uncomplicated malaria. They used data from 10 patients to optimize device performance and assess safety. They then performed a longitudinal study in the other 20 patients, who attended four or five visits over up to 37 days following antimalarial therapy, contributing 94 visits in total.

Photoacoustic waveforms and traces from infected blood cells have a particular shape and duration, and a different time delay to that of background skin signals. The team used these features to optimize signal processing algorithms with appropriate averaging, filtration and gating to identify true signals arising from infected red blood cells. As the study subjects all had dark skin with high melanin content, this time-resolved detection also helped to avoid interference from skin melanin.

On visit 1 (the day of diagnosis), 19/20 patients had detectable photoacoustic signals. Following treatment, these signals consistently decreased with each visit. Cytophone-positive samples exhibited median photoacoustic peak rates of 1.73, 1.63, 1.18 and 0.74 peaks/min on visits 1–4, respectively. One participant had a positive signal on visit 5 (day 30). The results confirm that Cytophone is sensitive enough to detect low levels of parasites in infected blood.

The researchers note that Cytophone detected the most common and deadliest species of malaria parasite, as well as one infection by a less common species and two mixed infections. “That was a really exciting proof-of-concept with the first generation of this platform,” says co-lead author Sunil Parikh in a press statement. “I think one key part of the next phase is going to involve demonstrating whether or not the device can detect and distinguish between species.”

The research team

Performance comparison

Compared with invasive microscopy-based detection, Cytophone demonstrated 95% sensitivity at the first visit and 90% sensitivity during the follow-up period, with 69% specificity and an area under the ROC curve of 0.84, suggesting excellent diagnostic performance. Cytophone also approached the diagnostic performance of standard PCR tests, with scope for further improvement.
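For readers unfamiliar with these metrics, sensitivity and specificity follow directly from the confusion-matrix counts. A minimal sketch (the specificity counts below are hypothetical, chosen only to reproduce the reported percentage; the study's raw counts are not given in the article):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly infected cases the test flags positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of uninfected cases the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# 19 of 20 infected patients were detected on visit 1 -> 95% sensitivity
print(sensitivity(19, 1))   # 0.95
# Hypothetical counts chosen only to match the reported 69% specificity
print(specificity(69, 31))  # 0.69
```

The area under the ROC curve summarizes the trade-off between these two quantities as the detection threshold is varied, with 1.0 being a perfect test and 0.5 no better than chance.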

Staff required just 4–6 h of training to operate Cytophone, plus a few days of experience to achieve optimal probe placement. And with minimal consumables required and the increasing affordability of lasers, the researchers estimate that the cost per malaria diagnosis will be low. The study also confirmed the safety of the Cytophone device. “Cytophone has the potential to be a breakthrough device allowing for non-invasive, rapid, label-free and safe in vivo diagnosis of malaria,” they conclude.

The researchers are now performing further malaria-related clinical studies focusing on asymptomatic individuals and children (for whom the needle-free aspect is particularly important). Simultaneously, they are continuing melanoma trials to detect early-stage disease and investigating the use of Cytophone to detect circulating blood clots in stroke patients.

“We are integrating multiple innovations to further enhance Cytophone’s sensitivity and specificity,” says Zharov. “We are also developing a cost-effective wearable Cytophone for continuous monitoring of disease progression and early warning of the risk of deadly disease.”

The study is described in Nature Communications.

Quantized vortices seen in a supersolid for the first time https://physicsworld.com/a/quantized-vortices-seen-in-a-supersolid-for-the-first-time/ Mon, 11 Nov 2024 14:27:08 +0000 https://physicsworld.com/?p=118121 Further research could shed light on neutron-star glitches

The post Quantized vortices seen in a supersolid for the first time appeared first on Physics World.

Quantized vortices – one of the defining features of superfluidity – have been seen in a supersolid for the first time. Observed by researchers in Austria, these vortices provide further confirmation that supersolids can be modelled as superfluids with a crystalline structure. This model could have a variety of other applications in quantum many-body physics, and the Austrian team is now using it to study pulsars, which are rotating, magnetized neutron stars.

A superfluid is a curious state of matter that can flow without any friction. Superfluid systems that have been studied in the lab include helium-4; type-II superconductors; and Bose–Einstein condensates (BECs) – all of which exist at very low temperatures.

More than five decades ago, physicists suggested that some systems could exhibit crystalline order and superfluidity simultaneously in a unique state of matter called a supersolid. In such a state, the atoms would be described by the same wavefunction and would therefore be delocalized across the entire crystal lattice. The order of the supersolid would then be defined by the nodes and antinodes of this wavefunction.

In 2004, Moses Chan of the Pennsylvania State University in the US and his PhD student Eun-Seong Kim reported observing a supersolid phase in superfluid helium-4. However, Chan and others have not been able to reproduce this result. Subsequently, researchers including Giovanni Modugno at Italy’s University of Pisa and Francesca Ferlaino at the University of Innsbruck in Austria have demonstrated evidence of supersolidity in BECs of magnetic atoms.

Irrotational behaviour

But until now, no-one had observed an important aspect of superfluidity in a supersolid: a superfluid never carries bulk angular momentum. If a superfluid is placed in a container that is rotated at moderate angular velocity, the fluid simply slips freely along the walls. As the rotation rate of the container increases, however, it becomes energetically costly to maintain the decoupling between the container and the superfluid. “Still, globally, the system is irrotational,” says Ferlaino; “So there’s really a necessity for the superfluid to heal itself from rotation.”

In a normal superfluid, this “healing” occurs by the formation of small, quantized vortices that dissipate the angular momentum, allowing the system to remain globally irrotational. “In an ordinary superfluid that’s not modulated in space [the vortices] form a kind of triangular structure called an Abrikosov lattice, because that’s the structure that minimizes their energy,” explains Ferlaino. It was unclear how the vortices might sit inside a supersolid lattice.

In the new work, Ferlaino and colleagues at the University of Innsbruck utilized a technique called magnetostirring to rotate a BEC of magnetic dysprosium-164 atoms. They caused the atoms to rotate simply by rotating the magnetic field. “That’s the beauty: it’s so simple but nobody had thought about this before,” says Ferlaino.

As the group increased the field’s rotation rate, they observed vortices forming in the condensate and migrating to the density minima. “Vortices are zeroes of density, so there it costs less energy to drill a hole than in a density peak,” says Ferlaino; “The order that the vortices assume is largely imparted by the crystalline structure – although their distance is dependent on the repulsion between vortices.”

Unexpected applications

The researchers believe the findings could be applicable in some unexpected areas of physics. Ferlaino tells of hearing a talk about the interior composition of neutron stars by the theoretical astrophysicist Massimo Mannarelli of Gran Sasso Laboratory in Italy. “During the coffee break I went to speak to him and we’ve started to work together.”

“A large part of the astrophysical community is convinced that the core of a neutron star is a superfluid,” Ferlaino says; “The crust is a solid, the core is a superfluid, and a layer called the inner crust has both properties together.” Pulsars are neutron stars that emit radiation in a narrow beam, giving them a well-defined pulse rate that depends on their rotation. As they lose energy through radiation emission, they gradually slow down.

Occasionally, however, their rotation rates suddenly speed up again in events called glitches. The researchers’ theoretical models suggest that the glitches could be caused by vortices unpinning from the supersolid and crashing into the solid exterior, imparting extra angular momentum. “When we impose a rotation on our supersolid that slows down, then at some point the vortices unpin and we see the glitches in the rotational frequency,” Ferlaino says. “This is a new direction – I don’t know where it will bring us, but for sure experimentally observing vortices was the first step.”

Theorist Blair Blakie of the University of Otago in New Zealand is excited by the research. “Vortices in supersolids were a bit of a curiosity in early theories, and sometimes you’re not sure whether theorists are just being a bit crazy considering things, but now they’re here,” he says. “It opens this new landscape for studying things from non-equilibrium dynamics to turbulence – all sorts of things where you’ve got this exotic material with topological defects in it. It’s very hard to predict what the killer application will be, but in these fields people love new systems with new properties.”

The research is described in Nature.

Sceptical space settlers, Einstein in England, trials of the JWST, tackling quantum fundamentals: micro reviews of the best recent books https://physicsworld.com/a/sceptical-space-settlers-einstein-in-england-trials-of-the-jwst-tackling-quantum-fundamentals-micro-reviews-of-the-best-recent-books/ Mon, 11 Nov 2024 11:00:53 +0000 https://physicsworld.com/?p=117748 Condensed natter: Physics World editors give their compressed verdicts on top new books

The post Sceptical space settlers, Einstein in England, trials of the JWST, tackling quantum fundamentals: micro reviews of the best recent books appeared first on Physics World.

A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?
By Kelly and Zach Weinersmith

Husband-and-wife writing team Kelly and Zach Weinersmith were excited about human settlements in space when they started research for their new book A City on Mars. But the more they learned, the more sceptical they became. From technology, practicalities and ethics, to politics and the legal framework, they uncovered profound problems at every step. With humorous panache and plenty of small cartoons by Zach, who also does the webcomic Saturday Morning Breakfast Cereal, the book is a highly entertaining guide that will dent the enthusiasm of most proponents of settling space. Kate Gardner

  • 2024 Particular Books

Einstein in Oxford
By Andrew Robinson

“England has always produced the best physicists,” Albert Einstein once said in Berlin in 1925. His high regard for British physics led him to pay three visits to the University of Oxford in the early 1930s, which are described by Andrew Robinson in his charming short book Einstein in Oxford. Sadly, the visits were not hugely productive for Einstein, who disliked the formality of Oxford life. His time there is best remembered for the famous blackboard – saved for posterity – on which he’d written while giving a public lecture. Matin Durrani

  • 2024 Bodleian Library Publishing

Pillars of Creation: How the James Webb Telescope Unlocked the Secrets of the Cosmos
By Richard Panek

The history of science is “a combination of two tales” says Richard Panek in his new book charting the story of the James Webb Space Telescope (JWST). “One is a tale of curiosity. The other is a tale of tools.” He has chosen an excellent case study for this statement. Pillars of Creation combines the story of the technological and political hurdles that nearly sank the JWST before it launched with a detailed account of its key scientific contributions. Panek’s style is also multi-faceted, mixing technical explanations with the personal stories of scientists fighting to push the frontiers of astronomy.  Katherine Skipper

  • 2024 Little, Brown

Quanta and Fields: the Biggest Ideas in the Universe
By Sean Carroll

With 2025 being the International Year of Quantum Science and Technology, the second book in prolific science writer Sean Carroll’s “Biggest Ideas” trilogy – Quanta and Fields – might make for a timely read. Following the first volume on “space, time and motion”, it tackles the key scientific principles that govern quantum mechanics, from wave functions to effective field theory. But beware: this book is packed with equations, formulae and technical concepts. It’s essentially a popular-science textbook, in which Carroll does things like examine each term in the Schrödinger equation and delve into the framework of group theory. Great for physicists but not, perhaps, for the more casual reader. Tushna Commissariat

  • 2024 Penguin Random House

Four-wave mixing could boost optical communications in space https://physicsworld.com/a/four-wave-mixing-could-boost-optical-communications-in-space/ Sat, 09 Nov 2024 15:02:38 +0000 https://physicsworld.com/?p=118113 Nonlinear effect amplifies weak signals

The post Four-wave mixing could boost optical communications in space appeared first on Physics World.

A new and practical approach to the low-noise amplification of weakened optical signals has been unveiled by researchers in Sweden. Drawing from the principles of four-wave mixing, Rasmus Larsson and colleagues at Chalmers University of Technology believe their approach could have promising implications for laser-based communication systems in space.

Until recently, space-based communication systems have largely relied on radio waves to transmit signals. Increasingly, however, these systems are being replaced with optical laser beams. The shorter wavelengths of these signals offer numerous advantages over radio waves. These include higher data transmission rates; lower power requirements; and lower risks of interception.

However, when transmitted across the vast distances of space, even a tightly focused laser beam will spread out significantly by the time its light reaches its destination, severely weakening the signal.

To deal with this loss, receivers must be extremely sensitive to incoming signals. This involves the preamplification of the signal above the level of electronic noise in the receiver. But conventional optical amplifiers are far too noisy to achieve practical space-based communications.

Phase-sensitive amplification

In a 2021 study, Larsson’s team showed how these weak signals can, in theory, be amplified with zero noise using a phase-sensitive optical parametric amplifier (PSA). However, this approach did not solve the problem entirely.

“The PSA should be the ideal preamplifier for optical receivers,” Larsson explains. “However, we don’t see them in practice due to their complex implementation requirements, where several synchronized optical waves of different frequencies are needed to facilitate the amplification.” These cumbersome requirements place significant demands on both transmitter and receiver, which limits their use in space-based communications.

To simplify preamplification, Larsson’s team used four-wave mixing. Here, the interaction between light at three different wavelengths within a nonlinear medium produces light at a fourth wavelength.

In this case, a weakened transmitted signal is mixed with two strong “pump” waves that are generated within the receiver. When the phases of the signal and pump are synchronized inside a doped optical fibre, light at the fourth wavelength interferes constructively with the signal. This boosts the amplitude of the signal without sacrificing low-noise performance.
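The mixing process obeys photon-energy conservation: the frequency of the newly generated wave is the sum of the two pump frequencies minus the signal frequency. The short Python sketch below illustrates this generic four-wave-mixing relation; the wavelengths are arbitrary example values, not those used in the Chalmers experiment.

```python
# Illustrative only: photon-energy conservation in four-wave mixing.
# The wavelengths below are arbitrary example values, not those used
# in the Chalmers experiment.

C = 299_792_458.0  # speed of light, m/s

def idler_wavelength(lam_pump1_nm, lam_pump2_nm, lam_signal_nm):
    """Wavelength (nm) of the fourth wave generated by four-wave mixing.

    Energy conservation requires f_new = f_pump1 + f_pump2 - f_signal.
    """
    freq = lambda lam_nm: C / (lam_nm * 1e-9)   # wavelength (nm) -> frequency (Hz)
    f_new = freq(lam_pump1_nm) + freq(lam_pump2_nm) - freq(lam_signal_nm)
    return C / f_new * 1e9                      # frequency (Hz) -> wavelength (nm)

# Two pumps near 1550 nm and a signal slightly to one side:
print(round(idler_wavelength(1545.0, 1555.0, 1548.0), 2))  # ~1551.97 nm
```

Because the relation involves only frequency ratios, the generated wavelength lands close to, but distinct from, the signal wavelength – which is why it can interfere constructively with the signal inside the fibre.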

Auxiliary waves

“This allows us to generate all required auxiliary waves in the receiver, with the transmitter only having to generate the signal wave,” says Larsson. “This is contrary to the case before, where most, if not all, waves were generated in the transmitter. The synchronization of the waves further uses the same specific lossless approach we demonstrated in 2021.”

The team says that this new approach offers a practical route to noiseless amplification within an optical receiver. “After optimizing the system, we were able to demonstrate the low-noise performance and a receiver sensitivity of 0.9 photons per bit,” Larsson explains. This amount of light is the minimum needed to reliably decode each bit of data. “This is the lowest sensitivity achieved to date for any coherent modulation format,” he adds.
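To get a feel for what a sensitivity of 0.9 photons per bit means, divide the optical energy arriving per bit by the energy of a single photon. The sketch below does this for a hypothetical link; the received power, bit rate and wavelength are illustrative numbers, not figures from the study.

```python
# Back-of-envelope unpacking of "photons per bit". The power, bit rate
# and wavelength are hypothetical link numbers, not figures from the
# Chalmers study.

H = 6.62607015e-34   # Planck constant, J s
C = 299_792_458.0    # speed of light, m/s

def photons_per_bit(power_w, bit_rate_bps, wavelength_m):
    """Mean number of photons arriving at the receiver per bit."""
    photon_energy = H * C / wavelength_m       # energy of one photon, J
    energy_per_bit = power_w / bit_rate_bps    # optical energy per bit, J
    return energy_per_bit / photon_energy

# A 10 Gbit/s link at 1550 nm receiving one nanowatt of optical power:
print(round(photons_per_bit(1e-9, 10e9, 1550e-9), 2))  # ~0.78
```

With these example numbers the receiver collects only about 0.78 photons per bit on average – below the 0.9-photon threshold, illustrating how little light a deep-space receiver may have to work with.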

This unprecedented sensitivity enabled the team to establish optical communication links between a PSA-amplified receiver and a conventional, single-wave transmitter. With a clear route to noiseless preamplification through some further improvements, the researchers are now hopeful that their approach could open up new possibilities across a wide array of applications – especially for laser-based communications in space.

“In this rapidly emerging topic, the PSA we have demonstrated can facilitate much higher data rates than the bandwidth-limited single photon detection technology currently considered.”

This ability would make the team’s PSA ideally suited for communication links between space-based transmitters and ground-based receivers. In turn, astronomers could finally break the notorious “science return bottleneck”. This would remove many current restrictions on the speed and quantity of data that can be transmitted by satellites, probes, and telescopes scattered across the solar system.

The research is described in Optica.

The Arecibo Observatory’s ‘powerful radiation environment’ led to its collapse, claims report https://physicsworld.com/a/the-arecibo-observatorys-powerful-radiation-environment-led-to-its-collapse-claims-report/ Fri, 08 Nov 2024 13:40:06 +0000 https://physicsworld.com/?p=118107 The disaster in December 2020 was caused by the failure of zinc in the observatory’s cable sockets

The post The Arecibo Observatory’s ‘powerful radiation environment’ led to its collapse, claims report appeared first on Physics World.


The Arecibo Observatory’s “uniquely powerful electromagnetic radiation environment” is the most likely initial cause of its destruction and collapse in December 2020. That’s according to a new report by the National Academies of Sciences, Engineering, and Medicine, which states that failure of zinc in the cables that held the telescope’s main platform led to it falling onto the huge 305 m reflector dish – causing catastrophic damage.

While previous studies of the iconic telescope’s collapse had identified the deformation of zinc inside the cable sockets, other explanations were also put forward. They included poor workmanship and the effects of Hurricane Maria, which hit the area in 2017 and subjected the telescope’s cables to the highest structural stress they had endured since the instrument opened in 1963.

Inspections after the hurricane showed some evidence of cable slippage. Yet these investigations, the report says, failed to note several failure patterns and did not provide plausible explanations for most of them. In addition, photos taken in 2019 gave “a clear indication of major socket deterioration”, but no further investigation followed.

The eight-strong committee that wrote the report, chaired by Roger McCarthy of the US firm McCarthy Engineering, found that surprising. “The lack of documented concern from the contracted engineers about the inconsequentiality of cable pullouts or the safety factors between Hurricane Maria in 2017 and the failure is alarming,” they say.

Further research

The report concludes that the root cause of the catastrophe was linked to the zinc sockets, which suffered “unprecedented and accelerated long-term creep-induced failure”. Metallic creep – the slow, permanent deformation of a metal under stress, exacerbated by heat – can eventually cause components to fail. “Each failure involved both the rupture of some of the cable’s wires and a deformation of the socket’s zinc, and is therefore the failure of a cable-socket assembly,” the report notes.

As to the cause of the creep, the committee sees the telescope’s radiation environment as “the only hypothesis that…provides a plausible but unprovable answer”. The committee proposes that the telescope’s powerful transmitters induced electrical currents in the cables and sockets, potentially causing “long-term, low-current electroplasticity” in the zinc. The increased induced plasticity accelerated the natural ongoing creep in the zinc.

The report adds that the collapse of the platform is the first documented zinc-induced creep failure, despite the metal being used in such a way for over a century. The committee now recommends that the National Science Foundation (NSF), which oversees Arecibo, offer the remaining socket and cable sections to the research community for further analysis on the “large-diameter wire connections, the long-term creep behavior of zinc spelter connections, and [the] materials science”.

  • Meanwhile, the NSF had planned to reopen the telescope site as an educational centre later this month, but that has now been delayed until next year to coincide with the NSF’s 75th anniversary.

Top-cited author Vaidehi Paliya discusses the importance of citations and awards https://physicsworld.com/a/top-cited-author-vaidehi-paliya-discusses-the-importance-of-citations-and-awards/ Fri, 08 Nov 2024 09:21:27 +0000 https://physicsworld.com/?p=117879 Paliya explains why it is essential for researchers to know about prizes such as the IOP highly cited paper award

The post Top-cited author Vaidehi Paliya discusses the importance of citations and awards appeared first on Physics World.

More than 50 papers from India have been recognized with a top-cited paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors whose papers, published in IOP Publishing and its partners’ journals between 2021 and 2023, are among the top 1% of the most-cited papers.

The winners include astrophysicist Vaidehi Paliya from Inter-University Centre for Astronomy and Astrophysics (IUCAA) and colleagues. Their work involved studying the properties of the “central engines” of blazars, a type of active galactic nucleus.

Vaidehi Paliya

“Knowing that the astronomy community has appreciated the published research is excellent,” says Vaidehi. “It has been postulated for a long time that the physics of relativistic jets is governed by the central supermassive black hole and accretion disk, also known as the central engine of an active galaxy. Our work is probably the first to quantify their physical properties, such as the black hole mass and the accretion disk luminosity, for a large sample of active galaxies hosting powerful relativistic jets called blazars.”

Vaidehi explains that getting many citations for the work, which was published in Astrophysical Journal Supplement Series, indicates that the published results “have been helpful to other researchers” and that this broad visibility also increases the chance that other groups will come across the work. “[Citations] are important because they can therefore trigger innovative ideas and follow-up research critical to advancing scientific knowledge,” adds Vaidehi.

Vaidehi says that he often turns to highly cited research “to appreciate the genuine ideas put forward by scientists”, with two recent examples being what inspired him to work on the central engine problem.

Indeed, Vaidehi says that prizes such as IOP’s highly cited paper award are essential for researchers, especially students. “Highly cited work is crucial not only to win awards but also for the career growth of a researcher. Awards play a significant role in further motivating fellow researchers to achieve even higher goals and highlight the importance of innovation,” he says. “Such awards are definitely a highlight in getting a career promotion. The news of the award may also lead to opportunities. For instance, to be invited to join other researchers working in similar areas, which will provide an ideal platform for future collaboration and research exploration.”

Vaidehi adds that results that are meaningful to broader research areas will likely result in higher citations. “Bringing innovation to the work is the key to success,” he says. “Prestigious awards, high citation counts, and other forms of success and recognition will automatically follow. You will be remembered by the community only for your contribution to its advancement and growth, so be genuine.”

  • For the full list of top-cited papers from India for 2024, see here.

How to boost the sustainability of solar cells https://physicsworld.com/a/how-to-boost-the-sustainability-of-solar-cells/ Thu, 07 Nov 2024 15:00:30 +0000 https://physicsworld.com/?p=118074 Roadmap authors look to the future of photovoltaic technologies in this podcast

The post How to boost the sustainability of solar cells appeared first on Physics World.

In this episode of the Physics World Weekly podcast I explore routes to more sustainable solar energy. My guests are four researchers at the UK’s University of Oxford who have co-authored the “Roadmap on established and emerging photovoltaics for sustainable energy conversion”.

They are the chemist Robert Hoye; the physicists Nakita Noel and Pascal Kaienburg; and the materials scientist Sebastian Bonilla. We define what sustainability means in the context of photovoltaics and we look at the challenges and opportunities for making sustainable solar cells using silicon, perovskites, organic semiconductors and other materials.

This podcast is supported by Pfeiffer Vacuum+Fab Solutions.

Pfeiffer is part of the Busch Group, one of the world’s largest manufacturers of vacuum pumps, vacuum systems, blowers, compressors and gas abatement systems. Explore its products at the Pfeiffer website.

 

Lightning sets off bursts of high-energy electrons in Earth’s inner radiation belt https://physicsworld.com/a/lightning-sets-off-bursts-of-high-energy-electrons-in-earths-inner-radiation-belt/ Thu, 07 Nov 2024 13:00:51 +0000 https://physicsworld.com/?p=118063 Unexpected finding could help determine the safest times to launch spacecraft

The post Lightning sets off bursts of high-energy electrons in Earth’s inner radiation belt appeared first on Physics World.

A supposedly stable belt of radiation 7000 km above the Earth’s surface may in fact be producing damaging bursts of high-energy electrons. According to scientists at the University of Colorado Boulder, US, the bursts appear to be triggered by lightning, and understanding them could help determine the safest “windows” for launching spacecraft – especially those with a human cargo.

The Earth is surrounded by two doughnut-shaped radiation belts that lie within our planet’s magnetosphere. While both belts contain high concentrations of energetic electrons, the electrons in the outer belt (which starts from about 4 Earth radii above the Earth’s surface and extends to about 9–10 Earth radii) typically have energies in the MeV range. In contrast, electrons in the inner belt, which is located between about 1.1 and 2 Earth radii, have energies between 10 and a few hundred kilo-electronvolts (keV).

At the higher end of this energy scale, these electrons easily penetrate the walls of spacecraft and can damage sensitive electronics inside. They also pose risks to astronauts who leave the protective environment of their spacecraft to perform extravehicular activities.

The size of the radiation belts, as well as the energy and number of electrons they contain, varies considerably over time. One cause of these variations is sub-second bursts of energetic electrons that enter the atmosphere from the surrounding magnetosphere. These rapid microbursts are most commonly seen in the outer radiation belt, where they are the result of interactions with phenomena called whistler-mode chorus radio waves. However, they can also be observed in the inner belt, where they are generated by whistlers produced by lightning storms. Such lightning-induced precipitation, as it is known, typically occurs at lower energies of tens to hundreds of keV.

Outer-belt energies in inner-belt electrons

In the new study, researchers led by CU Boulder aerospace engineering student Max Feinland observed clumps of electrons with MeV energies in the inner belt for the first time. This serendipitous discovery came while Feinland was analysing data from a now-decommissioned NASA satellite called the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX). He originally intended to focus on outer-belt electrons, but “after stumbling across these events in the inner belt, we thought they were interesting and decided to investigate further,” he tells Physics World.

After careful analysis, Feinland, who was working as an undergraduate research assistant in Lauren Blum’s team at CU Boulder’s Laboratory for Atmospheric and Space Physics at the time, identified 45 bursts of high-energy electrons in the inner belt in data from 1996 to 2006. At first, he and his colleagues weren’t sure what could be causing them, since the chorus waves known to produce such high-energy bursts are generally an outer-belt phenomenon. “We actually hypothesized a number of processes that could explain our observations,” he says. “We even thought that they might be due to Very Low Frequency (VLF) transmitters used for naval communications.”

The lightbulb moment, however, came when Feinland compared the bursts to records of lightning strikes in North America. Intriguingly, he found that several of the peaks in the electron bursts seemed to happen less than a second after the lightning strikes.

A lightning trigger

The researchers’ explanation for this is that radio waves produced after a lightning strike interact with electrons in the inner belt. These electrons then begin to oscillate between the Earth’s northern and southern hemispheres with a period of just 0.2 seconds. With each oscillation, some electrons drop out of the inner belt and into the atmosphere. This last finding was unexpected: while researchers knew that high-energy electrons can fall into the atmosphere from the outer radiation belt, this is the first time that they have observed them coming from the inner belt.
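The quoted 0.2-second period is consistent with a simple back-of-the-envelope estimate: an MeV electron travels at nearly the speed of light, and one full bounce is roughly a round trip along a dipole field line a few Earth radii long. The sketch below makes this crude estimate; the field-line parameter L ≈ 1.5 and the dipole arc-length approximation are assumptions for illustration, not values from the study.

```python
# Crude order-of-magnitude check of the ~0.2 s oscillation period.
# Assumptions (for illustration only): a near-light-speed MeV electron,
# an inner-belt dipole field line with L ~ 1.5, mirror points near the
# ends of the field line, and the standard dipole arc-length
# approximation S ~ 2.76 * L * R_E.

R_E = 6.371e6        # Earth radius, m
C = 299_792_458.0    # speed of light, m/s

def bounce_period(L, v=C):
    """One full north-south-north oscillation along a dipole field line, s."""
    field_line_length = 2.76 * L * R_E   # pole-to-pole arc length (approx.)
    return 2 * field_line_length / v     # there and back again

print(round(bounce_period(1.5), 2))  # ~0.2 s, comparable to the observed period
```

That a hemisphere-to-hemisphere round trip at light speed naturally takes a few tenths of a second is what makes the sub-second timing between strike and burst plausible.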

Feinland says the team’s discovery could help space-launch firms and national agencies decide when to launch their most sensitive payloads. With further studies, he adds, it might even be possible to determine how long these high-energy electrons remain in the inner belt after geomagnetic storms. “If we can quantify these lifetimes, we could determine when it is safest to launch spacecraft,” he says.

The researchers are now seeking to calculate the exact energies of the electrons. “Some of them may be even more energetic than 1 MeV,” Feinland says.

The present work is detailed in Nature Communications.

First human retinal image brings sight-saving portable OCT a step closer https://physicsworld.com/a/first-human-retinal-image-brings-sight-saving-portable-oct-a-step-closer/ Thu, 07 Nov 2024 09:40:37 +0000 https://physicsworld.com/?p=117983 Siloton’s handheld OCT system could soon be used for home-based monitoring of retinal disease, and may one day find its way onto future space missions

The post First human retinal image brings sight-saving portable OCT a step closer appeared first on Physics World.

Image of a human retina taken with the Akepa photonic chip

UK health technology start-up Siloton is developing a portable optical coherence tomography (OCT) system that uses photonic integrated circuits to miniaturize a tabletop’s-worth of expensive and fragile optical components onto a single coin-sized chip. In a first demonstration by a commercial organization, Siloton has now used its photonic chip technology to capture a sub-surface image of a human retina.

OCT is a non-invasive imaging technique employed as the clinical gold standard for diagnosing retinal disease. Current systems, however, are bulky and expensive and only available at hospital clinics or opticians. Siloton aims to apply its photonic chip – the optical equivalent of an electronic chip – to create a rugged, portable OCT system that patients could use to monitor disease progression in their own homes.

Siloton's Akepa photonic chip

The image obtained using Siloton’s first-generation OCT chip, called Akepa, reveals the fine layered structure of the retina in a healthy human eye. It clearly shows layers such as the outer photoreceptor segment and the retinal pigment epithelium, which are key clinical features for diagnosing and monitoring eye diseases.

“The system imaged the part of the retina that’s responsible for all of your central vision, most of your colour vision and the fine detail that you see,” explains Alasdair Price, Siloton’s CEO. “This is the part of the eye that you really care about looking at to detect disease biomarkers for conditions like age-related macular degeneration [AMD] or various diabetic eye conditions.”

Faster and clearer

Since Siloton first demonstrated that Akepa could acquire OCT images of a retinal phantom, the company has deployed some major software enhancements. For example, while the system previously took 5 min to image the phantom – an impractical length of time for human imaging – the imaging speed is now less than a second. The team is also exploring ways to improve image quality using artificial intelligence techniques.

Price explains that the latest image was recorded using the photonic chip in a benchtop set-up, noting that the company is about halfway through the process of miniaturizing all of the optics and electronics into a handheld binocular device.

“The electronics is all off-the-shelf, so we’re not going to focus too heavily on miniaturizing that until right at the end,” he says. “The innovative part is in miniaturizing the optics. We are very close to having it in that binocular headset now, the aim being that by early next year we will have that fully miniaturized.”

As such, the company plans to start deploying some research-only systems commercially next year. These will be handheld binocular-style devices that users hold up to their faces, complete with a base station for charging and communications. In focus groups with over 100 patients, Siloton confirmed that users prefer this binocular design over the traditional chin rest employed in full-size OCT systems.

“We were worried about that because we thought we may not be able to get the level of stability required,” says Price. “But we did further tests on the stability of the binocular system compared with the chin rest and actually found that the binoculars showed greater stability. Right now we’re still using a chin rest, so we’re hopeful that the binocular system will further improve our ability to record high-quality images.”

The Siloton founding team

Expanding applications

The principal aim of Siloton’s portable OCT system is to make the diagnosis and monitoring of eye diseases – such as diabetic macular oedema, retinal vein occlusion and AMD, the leading cause of sight loss in the developed world – more affordable and accessible.

Neovascular or “wet” AMD, for example, can be treated with regular eye injections, but this requires regular OCT scans at hospital appointments, which may not be available frequently enough for effective monitoring. With an OCT system in their own homes, patients can scan themselves every few days, enabling timely treatments as soon as disease progression is detected – as well as saving hospitals substantial amounts of money.

Ongoing improvements in the “quality versus cost” of the Akepa chip have also enabled Siloton to expand its target applications beyond ophthalmology. The ability to image structures such as the optic nerve, for example, enables the use of OCT to screen for optic neuritis, a common early symptom in patients with multiple sclerosis.

The company is also working with the European Space Agency (ESA) on a project investigating spaceflight-associated neuro-ocular syndrome (SANS), a condition suffered by about 70% of astronauts and which requires regular monitoring.

“At the moment, there is an OCT system on the International Space Station. But for longer-distance space missions, things like Gateway, there won’t be room for such a large system,” Price tells Physics World. “So we’re working with ESA to look at getting our chip technology onto future space missions.”

The post First human retinal image brings sight-saving portable OCT a step closer appeared first on Physics World.

Research update Siloton’s handheld OCT system could soon be used for home-based monitoring of retinal disease, and may one day find its way onto future space missions https://physicsworld.com/wp-content/uploads/2024/11/7-11-24-Siloton_chips.jpg newsletter1
‘Buddy star’ could explain Betelgeuse’s varying brightness https://physicsworld.com/a/buddy-star-could-explain-betelgeuses-varying-brightness/ Wed, 06 Nov 2024 15:00:34 +0000 https://physicsworld.com/?p=117988 As-yet-undetected stellar companion would displace light-blocking dust

The post ‘Buddy star’ could explain Betelgeuse’s varying brightness appeared first on Physics World.

An unseen low-mass companion star may be responsible for the recently observed “Great Dimming” of the red supergiant star Betelgeuse. According to this hypothesis, which was put forward by researchers in the US and Hungary, the star’s apparent brightness varies when an orbiting companion – dubbed α Ori B or, less formally, “Betelbuddy” – displaces light-blocking dust, thereby changing how much of Betelgeuse’s light reaches the Earth.

Located about 548 light-years away, in the constellation Orion, Betelgeuse is the 10th brightest star in the night sky. Usually, its brightness varies over a period of 416 days, but in 2019–2020, its output dropped to the lowest level ever recorded.

At the time, some astrophysicists speculated that this “Great Dimming” might mean that the star was reaching the end of its life and would soon explode as a supernova. Over the next three years, however, Betelgeuse’s brightness recovered, and alternative hypotheses gained favour. One such suggestion is that a cooler spot formed on the star and began ejecting material and dust, causing its light to dim as seen from Earth.

Pulsation periods

The latest hypothesis was inspired, in part, by the fact that Betelgeuse experiences another cycle in addition to its fundamental 416-day pulsation period. This second cycle, known as the long secondary period (LSP), lasts 2170 days, and the Great Dimming occurred after its minimum brightness coincided with a minimum in the 416-day cycle.

While astrophysicists are not entirely sure what causes LSPs, one leading theory suggests that they stem from a companion star. As this companion orbits its parent star, it displaces the cosmic dust the star produces and expels, which in turn changes the amount of starlight that reaches us.

Lots of observational data

To understand whether this might be happening with Betelgeuse, a team led by Jared Goldberg at the Flatiron Institute’s Center for Computational Astrophysics; Meridith Joyce at the University of Wyoming; and László Molnár of the Konkoly Observatory, HUN-REN CSFK, Budapest; analysed a wealth of observational data from the American Association of Variable Star Observers. “This association has been collecting data from both professional and amateur astronomers, so we had access to decades worth of data,” explains Molnár. “We also looked at data from the space-based SMEI instrument and spectroscopic observations collected by the STELLA robotic telescope.”

The researchers combined these direct-observation data with advanced computer models that simulate Betelgeuse’s activity. When they studied how the star’s brightness and its velocity varied relative to each other, they realized that the brightest phase must correspond to a companion being in front of it. “This is the opposite of what others have proposed,” Molnár notes. “For example, one popular hypothesis postulates that companions are enveloped in dense dust clouds, obscuring the giant star when they pass in front of them. But in this case, the companion must remove dust from its vicinity.”

As for how the companion does this, Molnár says they are not sure whether it evaporates the dust away or shepherds it to the opposite side of Betelgeuse with its gravitational pull. Both are possible, and Goldberg adds that other processes may also contribute. “Our new hypothesis complements the previous one involving the formation of a cooler spot on the star that ejects material and dust,” he says. “The dust ejection could occur because the companion star was out of the way, behind Betelgeuse rather than along the line of sight.”

The least absurd of all hypotheses?

The prospect of a connection between an LSP and the activity of a companion star is a longstanding one, Goldberg tells Physics World. “We know that Betelgeuse has an LSP, and if an LSP exists, that means a ‘buddy’ for Betelgeuse,” he says.

The researchers weren’t always so confident, though. Indeed, they initially thought the idea of a companion star for Betelgeuse was absurd, so the hardest part of their work was to prove to themselves that this was, in fact, the least absurd of all hypotheses for what was causing the LSP.

“We’ve been interested in Betelgeuse for a while now, and in a previous paper, led by Meridith, we already provided new size, distance and mass estimates for the star based on our models,” says Molnár. “Our new data started to point in one direction, but first we had to convince ourselves that we were right and that our claims are novel.”

The findings could have more far-reaching implications, he adds. While around one third of all red giants and supergiants have LSPs, the relationships between LSPs and brightness vary. “There are therefore a host of targets out there and potentially a need for more detailed models on how companions and dust clouds may interact,” Molnár says.

The researchers are now applying for observing time on space telescopes in hopes of finding direct evidence that the companion exists. One challenge they face is that because Betelgeuse is so bright – indeed, too bright for many sensitive instruments – a “Betelbuddy”, as Goldberg has nicknamed it, may be simpler to explain than it is to observe. “We’re throwing everything we can at it to actually find it,” Molnár says. “We have some ideas on how to detect its radiation in a way that can be separated from the absolute deluge of light Betelgeuse is producing, but we have to collect and analyse our data first.”

The study is published in The Astrophysical Journal.

Research update As-yet-undetected stellar companion would displace light-blocking dust https://physicsworld.com/wp-content/uploads/2024/11/Low-Res_BetelBuddy-Fig02.jpg
Black hole in rare triple system sheds light on natal kicks https://physicsworld.com/a/black-hole-in-rare-triple-system-sheds-light-on-natal-kicks/ Wed, 06 Nov 2024 12:42:29 +0000 https://physicsworld.com/?p=118020 V404 Cygni includes near and distant companion stars

The post Black hole in rare triple system sheds light on natal kicks appeared first on Physics World.

For the first time, astronomers have observed a black hole in a triple system with two other stars. The system is called V404 Cygni and was previously thought to be a closely-knit binary comprising a black hole and a star. Now, Kevin Burdge and colleagues at the Massachusetts Institute of Technology (MIT) have shown that the pair is orbited by a more distant tertiary star.

The observation supports the idea that some black holes do not experience a “natal kick” in momentum when they form. This is expected if a black hole is created from the sudden implosion of a star, rather than in a supernova explosion.

When black holes and neutron stars are born, they can gain momentum through mechanisms that are not well understood. These natal kicks can accelerate some neutron stars to speeds of hundreds of kilometres per second. For black holes, the kick is expected to be less pronounced — and in some scenarios, astronomers believe that these kicks must be very small.

Information about natal kicks can be gleaned by studying the behaviour of X-ray binaries, which usually pair a main sequence star with a black hole or neutron star companion. As these two objects orbit each other, material from the star is transferred to its companion, releasing vast amounts of gravitational potential energy as X-rays and other electromagnetic radiation.

Wobbling objects

In such binaries, any natal kick the black hole may have received during its formation can be deduced by studying how the black hole and its companion star orbit each other. This can be done using the radial velocity (or wobble) technique, which measures the Doppler shift of light from the orbiting objects as they accelerate towards and then away from an observer on Earth.
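At low speeds the wobble technique reduces to one line of arithmetic: the radial velocity is the fractional Doppler shift of a spectral line multiplied by the speed of light. A minimal sketch in Python (the line and the shift used here are hypothetical values chosen purely for illustration, not measurements of V404 Cygni):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lam_observed: float, lam_rest: float) -> float:
    """Non-relativistic Doppler shift: v_r = c * (delta-lambda / lambda_rest).
    Positive values mean the source is receding (redshifted)."""
    return C_KM_S * (lam_observed - lam_rest) / lam_rest

# Hypothetical measurement: the H-alpha line (rest 656.28 nm) observed at 656.30 nm
v = radial_velocity(656.30, 656.28)
print(f"radial velocity: {v:.1f} km/s")  # ~9.1 km/s, receding
```

Tracking how this velocity swings positive and negative over time traces out the orbit, which is how the natal kick (or its absence) is inferred.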

In their study, Burdge’s team scrutinized archival observations of V404 Cygni that were made using a number of different optical telescopes. A bright blob of light thought to be the black hole and its close-knit companion star is prominent in these images. But the team noticed something else, a second blob of light that could be a star orbiting the close-knit binary.

“We immediately noticed that there was another star next to the binary system, moving together with it,” Burdge explains. “It was almost like a happy accident, but was a direct product of an optical and an X-ray astronomer working together.”

As Burdge describes, the study came as a result of integrating his own work in optical astronomy with the expertise of MIT’s Erin Kara, who does X-ray astronomy on black holes. Burdge adds, “We were thinking about whether it might be interesting to take high-speed movies of black holes. While thinking about this, we went and looked at a picture of V404 Cygni, taken in visible light.”

Hierarchical triple

The observation provided the team with clear evidence that V404 Cygni is part of a “hierarchical triple” – an observational first. “In the system, a black hole is eating a star which orbits it every 6.5 days. But there is another star way out there that takes 70,000 years to complete its orbit around the inner system,” Burdge explains. Indeed, the third star is about 3500 au (3500 times the distance from the Earth to the Sun) from the black hole.
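Those two figures, 3500 au and roughly 70,000 years, can be sanity-checked against Kepler’s third law, which in solar-system units reads P² = a³/M (P in years, a in au, M in solar masses). A quick sketch, using an assumed total system mass of about 11 solar masses (an illustrative figure, not one quoted in the study):

```python
import math

def orbital_period_years(a_au: float, m_total_msun: float) -> float:
    """Kepler's third law in solar-system units: P^2 = a^3 / M_total."""
    return math.sqrt(a_au**3 / m_total_msun)

# Outer star of V404 Cygni: a = 3500 au, assumed total mass ~11 solar masses
p = orbital_period_years(3500, 11)
print(f"period: {p:,.0f} years")  # ~62,000 years, the same ballpark as 70,000
```

The agreement to within tens of per cent is what one expects given the uncertainties in the mass and the projected separation.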

By studying these orbits, the team gleaned important information about the black hole’s formation. If it had undergone a natal kick when its progenitor star collapsed, the tertiary system would have become more chaotic – causing the more distant star to unbind from the inner binary pair.

The team also determined that the outer star is in the later stages of its main-sequence evolution. This suggests that V404 Cygni’s black hole must have formed 3–5 billion years ago. When the black hole formed, the researchers believe it would have removed at least half of the mass from its binary companion. But since the black hole still has a relatively low mass, its progenitor star must have lost very little mass as it collapsed.

“The black hole must have formed through a gentle process, without getting a big kick like one might expect from a supernova,” Burdge explains. “One possibility is that the black hole formed from the implosion of a star.”

If this were the case, the star would have collapsed into a black hole directly, without large amounts of matter being ejected in a supernova explosion. Whether or not this is correct, the team’s observations suggest that at least some black holes can form with no natal kick – providing deeper insights into the later stages of stellar evolution.

The research is described in Nature.

Research update V404 Cygni includes near and distant companion stars https://physicsworld.com/wp-content/uploads/2024/11/6-11-2024-black-hole-triplet.jpg
UK particle physicist Mark Thomson selected as next CERN boss https://physicsworld.com/a/uk-particle-physicist-mark-thomson-selected-as-next-cern-boss/ Wed, 06 Nov 2024 12:29:24 +0000 https://physicsworld.com/?p=118022 Thomson will become the 17th director-general of the CERN particle-physics laboratory when he takes up the position in 2026

The post UK particle physicist Mark Thomson selected as next CERN boss appeared first on Physics World.

The UK particle physicist Mark Thomson has been selected as the 17th director-general of the CERN particle-physics laboratory. Thomson, 58, was chosen today at a meeting of the CERN Council. He will take up the position on 1 January 2026 for a five-year period succeeding the current CERN boss Fabiola Gianotti, who will finish her second term next year.

Three candidates were shortlisted for the job after being put forward by a search committee. Physics World understands that one was the Dutch theoretical physicist and former Dutch science minister Robbert Dijkgraaf; the other is reported to have been the Greek particle physicist Paris Sphicas.

With a PhD in physics from the University of Oxford, Thomson is currently executive chair of the Science and Technology Facilities Council (STFC), one of the main funding agencies in the UK. He spent a significant part of his career at CERN, working in the 1990s on precise measurements of the W and Z bosons as part of the OPAL experiment at CERN’s Large Electron–Positron Collider.

In 2000 he moved back to the UK to take up a position in experimental particle physics at the University of Cambridge. He was then a member of the ATLAS collaboration at CERN’s Large Hadron Collider (LHC) and between 2015 and 2018 served as co-spokesperson for the US Deep Underground Neutrino Experiment. Since 2018 he has served as the UK delegate to CERN’s Council.

Thomson was selected for his managerial credentials in science and connection to CERN. “Thomson is a talented physicist with great managerial experience,” notes Gianotti. “I have had the opportunity to collaborate with him in several contexts over the past years and I am confident he will make an excellent director-general. I am pleased to hand over this important role to him at the end of 2025.”

“Thomson’s election is great news – he has the scientific credentials, experience, and vision to ensure that CERN’s future is just as bright as its past, and it remains at the absolute cutting edge of research,” notes Peter Kyle, UK secretary of state for science, innovation and technology. “Work that is happening at CERN right now will be critical to scientific endeavour for decades to come, and for how we tackle some of the biggest challenges facing humanity.”

‘The right person’

Dirk Ryckbosch, a particle physicist at Ghent University and a delegate for Belgium in the CERN Council, told Physics World that Thomson is a “perfect match” for CERN. “As a former employee and a current member of the council, Thomson knows the ins and outs of CERN and he has the experience needed to lead a large research organization,” adds Ryckbosch.

The last UK director-general of CERN was Chris Llewellyn Smith who held the position between 1994 and 1998. Yet Ryckbosch acknowledges that within CERN, Brexit has never clouded the relationship between the UK and EU member states. “The UK has always remained a strong and loyal partner,” he says.

Thomson will have two big tasks when he becomes CERN boss in 2026: ensuring that operations begin at the upgraded LHC, known as the High-Luminosity LHC (HL-LHC), by 2030, and securing plans for the LHC’s successor.

CERN has put its weight behind the Future Circular Collider (FCC), which would cost about £12bn and, with a 91 km circumference, would be more than three times the size of the LHC. The FCC would first be built as an electron–positron collider with the aim of studying the Higgs boson in unprecedented detail. It could later be upgraded to a hadron collider, known as the FCC-hh.

The construction of the FCC will, however, require additional funding from CERN member states. Earlier this year Germany, which is a main contributor to CERN’s annual budget, publicly objected to the FCC’s high cost. Garnering support for the FCC, if CERN selects it as its next project, will be a delicate balancing act for Thomson. “With his international network and his diplomatic skills, Mark is the right person for this,” concludes Ryckbosch.

That view is backed by particle theorist John Ellis from King’s College London, who told Physics World that Thomson has the “ideal profile for guiding CERN during the selection and initiation of its next major accelerator project”. Ellis adds that Thomson “brings to the role a strong record of research in collider physics as well as studies of electron-positron colliders and leadership in the DUNE neutrino experiment and also extensive managerial experience”.

News Thomson will become the 17th director-general of the CERN particle-physics laboratory when he takes up the position in 2026 https://physicsworld.com/wp-content/uploads/2024/11/Mark-Thomson-06_11_24.jpg newsletter1
Timber! Japan launches world’s first wooden satellite into space https://physicsworld.com/a/timber-japan-launches-worlds-first-wooden-satellite-into-space/ Tue, 05 Nov 2024 17:31:39 +0000 https://physicsworld.com/?p=117985 LignoSat will test the possibilities of building wooden human habitats in space

The post Timber! Japan launches world’s first wooden satellite into space appeared first on Physics World.

Researchers in Japan have launched the world’s first wooden satellite to test the feasibility of using timber in space. Dubbed LignoSat, the small “cubesat” was developed by Kyoto University and the logging firm Sumitomo Forestry. It was launched to the International Space Station (ISS) on 4 November by a SpaceX Falcon 9 rocket from the Kennedy Space Center in Florida.

Given the lack of water and oxygen in space, wood is potentially more durable in orbit than it is on Earth, where it can rot or burn. This makes it an attractive and sustainable alternative to metals such as aluminium, which can shed aluminium oxide particles during re-entry into the Earth’s atmosphere.

Work began on LignoSat in 2020. In 2022 scientists at Kyoto sent samples of cherry, birch and magnolia wood to the ISS where the materials were exposed to the harsh environment of space for 240 days to test their durability.

While each specimen performed well with no clear deformation, the researchers settled on building LignoSat from magnolia – or Hoonoki in Japanese. This type of wood has traditionally been used for sword sheaths and is known for its strength and stability.

LignoSat is made without screws or glue, and is equipped with external solar panels and encased in an aluminium frame. Next month the satellite is expected to be deployed in orbit around the Earth for about six months to measure how the wood withstands the space environment and how well it protects the chips inside the satellite from cosmic radiation.

Data will be collected on the wood’s expansion and contraction, the internal temperature and the performance of the electronic components inside.

Researchers are hopeful that if LignoSat is successful it could pave the way for satellites to be made from wood. This would be more environmentally friendly given that each satellite would simply burn up when it re-enters the atmosphere at the end of its lifetime.

“With timber, a material we can produce by ourselves, we will be able to build houses, live and work in space forever,” astronaut Takao Doi, who studies human space activities at Kyoto University, told Reuters.

News LignoSat will test the possibilities of building wooden human habitats in space https://physicsworld.com/wp-content/uploads/2024/11/SpaceXlaunch_4-November.jpg newsletter1
Physicists propose new solution to the neutron lifetime puzzle https://physicsworld.com/a/physicists-propose-new-solution-to-the-neutron-lifetime-puzzle/ Tue, 05 Nov 2024 14:00:31 +0000 https://physicsworld.com/?p=117907 Different neutron states could explain why different experiments produce different figures for how long neutrons survive before decaying

The post Physicists propose new solution to the neutron lifetime puzzle appeared first on Physics World.

Neutrons inside the atomic nucleus are incredibly stable, but free neutrons decay within 15 minutes – give or take a few seconds. The reason we don’t know this figure more precisely is that the two main techniques used to measure it produce conflicting results. This so-called neutron lifetime problem has perplexed scientists for decades, but now physicists at TU Wien in Austria have come up with a possible explanation. The difference in lifetimes, they say, could stem from the neutron being in not-yet-discovered excited states that have different lifetimes as well as different energies.

According to the Standard Model of particle physics, free neutrons undergo a process called beta decay that transforms a neutron into a proton, an electron and an antineutrino. To measure the neutrons’ average lifetime, physicists employ two techniques. The first, known as the bottle technique, involves housing neutrons within a container and then counting how many of them remain after a certain amount of time. The second approach, known as the beam technique, is to fire a neutron beam with a known intensity through an electromagnetic trap and measure how many protons exit the trap within a fixed interval.

Researchers have been performing these experiments for nearly 30 years but they always encounter the same problem: the bottle technique yields an average neutron survival time of 880 s, while the beam method produces a lifetime of 888 s. Importantly, this eight-second difference is larger than the uncertainties of the measurements, meaning that known sources of error cannot explain it.
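To see why eight seconds matters, compare the exponential survival curves the two lifetimes imply. A short sketch (illustrative only; real analyses fit full decay curves to counts rather than evaluating a single time point):

```python
import math

TAU_BOTTLE = 880.0  # s, average lifetime from bottle experiments
TAU_BEAM = 888.0    # s, average lifetime from beam experiments

def surviving_fraction(t: float, tau: float) -> float:
    """Fraction of free neutrons still undecayed after time t."""
    return math.exp(-t / tau)

# After storing neutrons for 15 minutes, the two lifetimes predict
# surviving fractions that differ by about 0.3 percentage points:
# small, but well outside the quoted measurement uncertainties.
t = 900.0
gap = surviving_fraction(t, TAU_BEAM) - surviving_fraction(t, TAU_BOTTLE)
print(f"{gap:.4f}")  # prints 0.0033
```

With enough counted neutrons, a discrepancy this size is easily resolved statistically, which is why it cannot be waved away as noise.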

A mix of different neutron states?

A team led by Benjamin Koch and Felix Hummel of TU Wien’s Institute of Theoretical Physics is now suggesting that the discrepancy could be caused by nuclear decay producing free neutrons in a mix of different states. Some neutrons might be in the ground state, for example, while others could be in a higher-energy excited state. This would alter the neutrons’ lifetimes, they say, because elements in the so-called transition matrix that describes how neutrons decay into protons would be different for neutrons in excited states and neutrons in ground states.

As for how this would translate into different beam and bottle lifetime measurements, the team say that neutron beams would naturally contain several different neutron states. Neutrons in a bottle, in contrast, would almost all be in the ground state – simply because they would have had time to cool down before being measured in the container.

Towards experimental tests

Could these different states be detected? The researchers say it’s possible, but they caution that experiments will be needed to prove it. They also note that theirs is not the first hypothesis put forward to explain the neutron lifetime discrepancy. Perhaps the simplest explanation is that the gap stems from unknown systematic errors in either the beam experiment, the bottle experiment, or both. Other, more theoretical approaches have also been proposed, but Koch says they do not align with existing experimental data.

“Personally, I find hypotheses that require fewer and smaller new assumptions – and that are experimentally testable – more appealing,” Koch says. As an example, he cites a 2020 study showing that a phenomenon called the inverse quantum Zeno effect could speed up the decay of bottle-confined neutrons, calling it “an interesting idea”. Another possible explanation of the puzzle, which he says he finds “very intriguing”, has just been published; it describes the admixture of novel bound electron–proton states, known as “Second Flavor Hydrogen Atoms”, in the final state of a weak decay.

As someone with a background in quantum gravity and theoretical physics beyond the Standard Model, Koch is no stranger to predictions that are hard (and sometimes impossible, at least in the near term) to test. “Contributing to the understanding of a longstanding problem in physics with a hypothesis that could be experimentally tested soon is therefore particularly exciting for me,” he tells Physics World. “If our hypothesis of excited neutron states is confirmed by future experiments, it would shed a completely new light on the structure of neutral nuclear matter.”

The researchers now plan to collaborate with colleagues from the Institute for Atomic and Subatomic Physics at TU Wien to re-evaluate existing experimental data and explore various theoretical models. “We’re also hopeful about designing experiments specifically aimed at testing our hypothesis,” Koch reveals.

The present study is detailed in Physical Review D.

Research update Different neutron states could explain why different experiments produce different figures for how long neutrons survive before decaying https://physicsworld.com/wp-content/uploads/2024/11/Low-Res_Neutronen.jpg newsletter1
Women and physics: navigating history, careers, and the path forward https://physicsworld.com/a/women-and-physics-navigating-history-careers-and-the-path-forward/ Tue, 05 Nov 2024 11:31:59 +0000 https://physicsworld.com/?p=117838 This webinar, run by IOP Publishing ebooks, explores the historical journey, challenges, and achievements of women in physics

The post Women and physics: navigating history, careers, and the path forward appeared first on Physics World.


Join us for an insightful webinar based on Women and Physics (Second Edition), where we will explore the historical journey, challenges, and achievements of women in the field of physics, with a focus on English-speaking countries. The session will dive into various topics such as the historical role of women in physics, the current statistics on female representation in education and careers, navigating family life and career, and the critical role men play in fostering a supportive environment. The webinar aims to provide a roadmap for women looking to thrive in physics.

Laura McCullough is a professor of physics at the University of Wisconsin-Stout. Her PhD from the University of Minnesota was in science education with a focus on physics education research. She is the recipient of multiple awards, including her university system’s highest teaching award, her university’s outstanding research award, and her professional society’s service award. She is a fellow of the American Association of Physics Teachers. Her primary research area is gender and science and surrounding issues. She has also done significant work on women in leadership, and on students with disabilities.

About this ebook

Women and Physics is the second edition of a volume that brings together research on a wide variety of topics relating to gender and physics, cataloguing the extant literature to provide a readable and concise grounding for the reader. While there are many biographies and collections of essays in the area of women and physics, no other book is as research focused. Starting with the current numbers of women in physics in English-speaking countries, it explores the different issues relating to gender and physics at different educational levels and career stages. From the effects of family and schooling to the barriers faced in the workplace and at home, this volume is an exhaustive overview of the many studies focused specifically on women and physics. This edition contains updated references and new chapters covering the underlying structures of the research and more detailed breakdowns of career issues.

Webinar This webinar run by IOP Publishing ebooks, explores the historical journey, challenges, and achievements of women in physics https://physicsworld.com/wp-content/uploads/2024/11/9780750364355_FC_fullsize.jpg
Why AI is a force for good in science communication https://physicsworld.com/a/why-ai-is-a-force-for-good-in-science-communication/ Tue, 05 Nov 2024 10:00:13 +0000 https://physicsworld.com/?p=117436 Claire Malone looks at the ups and downs of using AI to communicate science

The post Why AI is a force for good in science communication appeared first on Physics World.

In August 2024 the influential Australian popular-science magazine Cosmos found itself not just reporting the news – it had become the news. Owned by CSIRO Publishing – part of Australia’s national science agency – Cosmos had posted a series of “explainer” articles on its website that had been written by generative artificial intelligence (AI) as part of an experiment funded by Australia’s Walkley Foundation. Covering topics such as black holes and carbon sinks, the text had been fact-checked against the magazine’s archive of more than 15,000 past articles to guard against misinformation, but at least one of the new articles contained inaccuracies.

Critics, such as the science writer Jackson Ryan, were quick to condemn the magazine’s experiment as undermining and devaluing high-quality science journalism. As Ryan wrote on his Substack blog, AI not only makes things up and trains itself on copyrighted material, but “for the most part, provides corpse-cold, boring-ass prose”. Contributors and former staff also complained to Australia’s ABC News that they’d been unaware of the experiment, which took place just a few months after the magazine had made five of its eight staff redundant.

It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation

The Cosmos incident is a reminder that we’re in the early days of using generative AI in science journalism. It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation, potentially damaging modern society in which science and technology shape so many aspects of our lives. Accurate, high-quality science communication is vital, especially if we are to pique the public’s interest in physics and encourage more people into the subject.

Kanta Dihal, a lecturer at Imperial College London who researches the public’s understanding of AI, warns that the impacts of recent advances in generative AI on science communication are “in many ways more concerning than exciting”. Sure, AI can level the playing field by, for example, enabling students to learn video-editing skills without expensive tools and helping people with disabilities to access course material in accessible formats. “[But there is also] the immediate large-scale misuse and misinformation,” Dihal says.

We do need to take these concerns seriously, but AI could benefit science communication in ways you might not realize. Simply put, AI is here to stay – in fact, the science behind it led to the physicist John Hopfield and computer scientist Geoffrey Hinton winning the 2024 Nobel Prize for Physics. So how can we marshal AI to best effect not just to do science but to tell the world about science?

Dangerous game

Generative AI is a step up from “machine learning”, where a computer predicts how a system will behave based on data it’s analysed. Machine learning is used in high-energy physics, for example, to model particle interactions and detector performance. It does this by learning to recognize patterns in existing data, before making predictions and then validating that those predictions match the original data. Machine learning saves researchers from having to manually sift through terabytes of data from experiments such as those at CERN’s Large Hadron Collider.

Generative AI, on the other hand, doesn’t just recognize and predict patterns – it can create new ones too. When it comes to the written word, a generative AI could, for example, invent a story from a few lines of input. It is exactly this language-generating capability that caused such a furore at Cosmos and led some journalists to worry that AI might one day make their jobs obsolete. But how does a generative AI produce replies that feel like a real conversation?

Claude Shannon holding a wooden mouse

Perhaps the best known generative AI is ChatGPT (where GPT stands for generative pre-trained transformer), which is an example of a Large Language Model (LLM). Language modelling dates back to the 1950s, when the US mathematician Claude Shannon applied information theory – the branch of maths that deals with quantifying, storing and transmitting information – to human language. Shannon measured how well language models could predict the next word in a sentence by assigning probabilities to each word based on patterns in the data the model is trained on.
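Shannon-style next-word prediction can be illustrated with a toy bigram model. The sketch below is purely illustrative (the corpus and word choices are invented, and real language models train on billions of words with far more sophisticated architectures), but the core idea – assigning probabilities to the next word based on counts in the training data – is the same:

```python
import random
from collections import Counter, defaultdict

# A toy training corpus; real language models train on billions of words
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word (bigram counts)
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Sample the next word with probability proportional to its bigram count."""
    counts = followers[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=list(weights))[0]

# "the" was followed by "cat" twice and by "mat" and "dog" once each,
# so this model assigns P(cat|the) = 0.5 and P(mat|the) = P(dog|the) = 0.25
print(dict(followers["the"]))  # {'cat': 2, 'mat': 1, 'dog': 1}
```

Sampling from these probabilities, rather than always picking the single most likely word, is what makes the generated text vary from one run to the next.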

Such methods of statistical language modelling are now fundamental to a range of natural language processing tasks, from building spell-checking software to translating between languages and even recognizing speech. Recent advances in these models have significantly extended the capabilities of generative AI tools, with the “chatbot” functionality of ChatGPT making it especially easy to use.

ChatGPT racked up a million users within five days of its launch in November 2022 and since then other companies have unveiled similar tools, notably Google’s Gemini and Perplexity. With more than 600 million users per month as of September 2024, ChatGPT is trained on a range of sources, including books, Wikipedia articles and chat logs (although the precise list is not explicitly described anywhere). The AI spots patterns in the training texts and builds sentences by predicting the most likely word that comes next.

ChatGPT operates a bit like a slot machine, with probabilities assigned to each possible next word in the sentence. In fact, the term AI is a little misleading, being more “statistically informed guessing” than real intelligence, which explains why ChatGPT has a tendency to make basic errors or “hallucinate”. Cade Metz, a technology reporter from the New York Times, reckons that chatbots invent information as much as 27% of the time.

One notable hallucination occurred in February 2023 when Bard – Google’s forerunner to Gemini – declared in its first public demonstration that the James Webb Space Telescope (JWST) had taken “the very first picture of a planet outside our solar system”. As Grant Tremblay from the US Center for Astrophysics pointed out, this feat had been accomplished in 2004, some 17 years before the JWST was launched, by the European Southern Observatory’s Very Large Telescope in Chile.

AI-generated image of a rat with significant errors

Another embarrassing incident was the comically anatomically incorrect picture of a rat created by the AI image generator Midjourney, which appeared in a journal paper that was subsequently retracted. Some hallucinations are more serious. Amateur mushroom pickers, for example, have been warned to steer clear of online foraging guides, likely written by AI, that contain information running counter to safe foraging practices. Many edible wild mushrooms look deceptively similar to their toxic counterparts, making careful identification critical.

By using AI to write online content, we’re in danger of triggering a vicious circle of increasingly misleading statements, polluting the Internet with unverified output. What’s more, AI can perpetuate existing biases in society. Google, for example, was forced to publish an embarrassing apology, saying it would “pause” the ability to generate images with Gemini after the service was used to create images of racially diverse Nazi soldiers.

More seriously, women and some minority groups are under-represented in healthcare data, biasing the training set and potentially skewing the recommendations of predictive AI algorithms. One study led by Laleh Seyyed-Kalantari from the University of Toronto (Nature Medicine 27 2176) found that computer-aided diagnoses of chest X-rays are less accurate for Black patients than for white patients.

Generative AI could even increase inequalities if it becomes too commercial. “Right now there’s a lot of free generative AI available, but I can also see that getting more unequal in the very near future,” Dihal warns. People who can afford to pay for ChatGPT subscriptions, for example, have access to versions of the AI based on more up-to-date training data. They therefore get better responses than users restricted to the “free” version.

Clear communication

But generative AI tools can do much more than churn out uninspired articles and create problems. One beauty of ChatGPT is that users interact with it conversationally, just like you’d talk to a human communicator at a science museum or science festival. You could start by typing something simple (such as “What is quantum entanglement?”) before delving into the details (e.g. “What kind of physical systems are used to create it?”). You’ll get answers that meet your needs better than any standard textbook.

Teenage girl using laptop at home

Generative AI could also boost access to physics by providing an interactive way to engage with groups – such as girls, people of colour or students from low-income backgrounds – who might face barriers to accessing educational resources in more traditional formats. That’s the idea behind online tuition platforms such as Khan Academy, which has integrated a customized version of ChatGPT into its tuition services.

Instead of presenting fully formed answers to questions, its generative AI is programmed to prompt users to work out the solution themselves. If a student types, say, “I want to understand gravity” into Khan’s generative AI-powered tutoring program, the AI will first ask what the student already knows about the subject. The “conversation” between the student and the chatbot will then evolve in the light of the student’s response.

As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant

AI can also remove barriers that some people face in communicating science, allowing a wider range of voices to be heard and thereby boosting the public’s trust in science. As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant (see box below).

It’s also helped Duncan Yellowlees, a dyslexic research developer who trains researchers to communicate. “I find writing long text really annoying, so I speak it into OtterAI, which converts the speech into text,” he says. The text is sent to ChatGPT, which converts it into a blog. “So it’s my thoughts, but I haven’t had to write them down.”

Then there’s Matthew Tosh, a physicist-turned-science presenter specializing in pyrotechnics. He has a progressive disease, which means he faces an increasing struggle to write concisely. ChatGPT, however, lets him create draft social-media posts, which he then rewrites in his own style. As a result, he can maintain that all-important social-media presence while managing his disability at the same time.

Despite the occasional mistake made by generative AI bots, misinformation is nothing new. “That’s part of human behaviour, unfortunately,” Tosh admits. In fact, he thinks errors can – perversely – be a positive. Students who wrongly think a kilo of cannonballs will fall faster than a kilo of feathers create the perfect chance for teachers to discuss Newtonian mechanics. “In some respects,” says Tosh, “a little bit of misinformation can start the conversation.”

AI as a voice-to-text tool

Claire Malone at her desk

As a science journalist – and previously as a researcher hunting for new particles in data from the ATLAS experiment at CERN – I’ve longed to use speech-to-text programs to complete assignments. That’s because I have a disability – cerebral palsy – that makes typing impractical. For a long time this meant I had to dictate my work to a team of academic assistants for many hours a week. But in 2023 I started using Voiceitt, an AI-powered app optimized for speech recognition for people with non-standard speech like mine.

You train the app by first reading out a couple of hundred short training phrases. It then uses AI to draw on thousands of hours of speech from other non-standard speakers in its database to optimize its training. As Voiceitt is used, it continues refining the AI model, improving speech recognition over time. The app also has a generative AI model that corrects any grammatical errors introduced during transcription. Each week, I find myself correcting the app’s transcriptions less and less, which is a bonus when facing journalistic deadlines, such as the one for this article.

The perfect AI assistant?

One of the first news organizations to experiment with AI tools was Associated Press (AP), which in 2014 began automating routine financial stories about corporate earnings. AP now also uses AI to create transcripts of videos, write summaries of sports events, and spot trends in large stock-market data sets. Other news outlets use AI tools to speed up “back-office” tasks such as transcribing interviews, analysing information or converting data files. Tools such as Midjourney can even help journalists to brief professional illustrators to create images.

However, there is a fine line between using AI to speed up your workflow and letting it make content without human input. Many news outlets and writers’ associations have issued statements guaranteeing not to use generative AI as a replacement for human writers and editors. Physics World, for example, has pledged not to publish fresh content generated purely by AI, though the magazine does use AI to assist with transcribing and summarizing interviews.

So how can generative AI be incorporated into the effective and trustworthy communication of science? First, it’s vital to ask the right question – in fact, composing a prompt can take several attempts to get the desired output. When summarizing a document, for example, a good prompt should include the maximum word length, an indication of whether the summary should be in paragraphs or bullet points, and information about the target audience and required style or tone.

Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science

Second, information obtained from AI needs to be fact-checked. AI can easily hallucinate, making a chatbot rather like an unreliable (but occasionally brilliant) colleague who can get the wrong end of the stick. “Don’t assume that whatever the tool is, that it is correct,” says Phil Robinson, editor of Chemistry World. “Use it like you’d use a peer or colleague who says ‘Have you tried this?’ or ‘Have you thought of that?’”

Finally, science communicators must be transparent in explaining how they used AI. Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science. But if we are to maintain the quality of science journalism – so vital for the public’s trust in science – we must continuously evaluate and manage how AI is incorporated into the scientific information ecosystem.

Generative AI can help you say what you want to say. But as Dihal concludes: “It’s no substitute for having something to say.”

The post Why AI is a force for good in science communication appeared first on Physics World.

]]>
Feature Claire Malone looks at the ups and downs of using AI to communicate science https://physicsworld.com/wp-content/uploads/2024/10/2024-11-Malone-AI-bot-looming-over-society-2063425288-iStock_smartboy10.jpg newsletter
Space-based solar power: ‘We have nothing to lose and everything to gain’ https://physicsworld.com/a/space-based-solar-power-we-have-nothing-to-lose-and-everything-to-gain/ Mon, 04 Nov 2024 17:00:50 +0000 https://physicsworld.com/?p=117567 Martin Soltau explains why beaming sunlight to Earth as microwaves could help solve our energy needs

The post Space-based solar power: ‘We have nothing to lose and everything to gain’ appeared first on Physics World.

]]>
The most important and pressing issue of our times is the transition to clean energy while meeting rising global demand. Cheap, abundant and reliable energy underpins the quality of life for all – and one potentially exciting way to achieve this transition is space-based solar power (SBSP). It would involve capturing sunlight in space and beaming it as microwaves down to Earth, where it would be converted into electricity to power the grid.

For proponents of SBSP such as myself, it’s a hugely promising technology. Others, though, are more sceptical. Earlier this year, for example, NASA published a report from its Office of Technology, Policy and Strategy that questioned the cost and practicality of SBSP. Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands, has also examined the technical challenges in a report for the IEEE.

Some of these sceptical positions on SBSP were addressed in a recent Physics World article by James McKenzie. Conventional solar power is cheap, he argued, so why bother putting large solar power satellites in space? After all, the biggest barriers to building more solar plants here on Earth aren’t technical, but mostly come in the form of belligerent planning officials and local residents who don’t want their views ruined.

However, in my view we need to take a whole-energy-system perspective to see why innovation is essential for the energy transition. Wind, solar and batteries are “low-density” renewables, requiring many tonnes of minerals to be mined and refined for each megawatt-hour of energy. How can this be sustainable and give us energy security, especially when so much of our supply of these minerals depends on production in China?

Low-density renewables also require a Herculean expansion in electricity grid transmission pylons and cables to connect them to users. Other drawbacks of wind and solar are that they depend on the weather and require suitable storage – which currently does not exist at the capacity or cost needed. These forms of energy also need duplicated back-up, which is expensive, and other sources of baseload power for times when it’s cloudy or there’s no wind.

Look to the skies

With no night or weather in space, however, a solar panel there generates 13 times as much energy as the same panel on Earth. SBSP, if built, would generate power continuously, transmitted as microwaves through the atmosphere with almost no loss. It could therefore deliver baseload power 24 hours a day, irrespective of local weather conditions on Earth.

SBSP could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar

Another advantage of SBSP is that it could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar. We currently do this using fossil-fuel-powered gas-fired “peaker” plants, which could therefore be put out to pasture. SBSP is also scalable, allowing the energy it produces to be easily exported to other nations without expensive cables, giving it a truly global impact.

A recent whole-energy-system study by researchers at Imperial College London concluded that introducing just 8 GW of SBSP into the UK’s energy mix would deliver system savings of over £4bn every year. In my view, which is shared by others too, the utility of SBSP is likely to be even greater when considering whole continents or global alliances. It can give us affordable and reliable clean energy.

My firm, Space Solar, has designed a solar-power satellite called CASSIOPeiA, which is more than twice as powerful – based on the key metric of power per unit mass – as ESA’s design. So far, we have built and successfully demonstrated our power beaming technology, and following £5m of engineering design work, we have arguably the most technically mature design in the world.

If all goes to plan, we’ll have our first commercial product by 2029. Offering 30 MW of power, it could be launched by a single Starship rocket, and scale to gigawatt systems from there. Sure, there are engineering challenges, but these are mostly based on ensuring that the economics remain competitive. Space Solar is also lucky in having world-class experts working in spacecraft engineering, advanced photovoltaics, power beaming and in-space robotics.

Brighter and better

But why then was NASA’s study so sceptical of SBSP? I think it was because the report made absurdly conservative assumptions about the economics. NASA assumed an operating life of only 10 years, so to run for 30 years the whole solar power satellite would have to be built and launched three times. Yet satellites today generally last for more than 25 years, with most baselined for a minimum 15-year life.

The NASA report also assumed that Starship launch costs would remain at around $1500/kg. However, other independent analyses, such as “Space: the dawn of a new age” produced in 2022 by Citi Group, have forecast that the figure will be an order of magnitude less – just $100/kg – by 2040. I could go on, as there are plenty more examples of risk-averse thinking in the NASA report.

Buried in the report, however, the study also looked at more reasonable scenarios than the “baseline” and concluded that “these conditions would make SBSP systems highly competitive with any assessed terrestrial renewable electricity production technology’s 2050 cost projections”. Curiously, these findings did not make it into the executive summary.

The NASA study has been widely criticized, including by former NASA physicist John Mankins, who invented another approach to space solar dubbed SPS Alpha. Speaking on a recent episode of the DownLink podcast, he suspected NASA’s gloomy stance may in part be because it focuses on space tech and space exploration rather than energy for Earth. NASA bosses might fear that if they were directed by Congress to pursue SBSP, money for other priorities might be at risk.

I also question Barde’s sceptical opinion of the technology of SBSP, which he expressed in an article for IEEE Spectrum. Barde appeared not to understand many of the design features that make SBSP technically feasible. He wrote, for example, about “gigawatts of power coursing through microwave systems” of the solar panels on the satellite, which sounds ominous and challenging to achieve.

In reality, the gigawatts of sunlight are reflected onto a large area of photovoltaics containing a billion or so solar cells. Each cell, which includes an antenna and electronic components to convert the sunlight into microwaves, is arranged in a sandwich module just a few millimetres thick handling just 2 W of power. So although the satellite delivers gigawatts overall, the figure is much lower at the component level. What’s more, each cell can be made using tried and tested radio-frequency components.

As for Barde’s fears about thermal management – in other words, how we can stop the satellite from overheating – that has already been analysed in detail. The plan is to use passive radiative cooling without active systems. Barde also warns of temperature swings as the satellites pass through eclipse during the spring and autumn equinoxes. But this problem is common to all satellites and has, in any case, been analysed as part of our engineering work. In essence, Barde’s claim of “insurmountable technical difficulties” is simply his opinion.

Until the first solar power satellite is commissioned, there will always be sceptics [but] that was also true of reusable rockets and cubesats, both of which are now mainstream technology

Until the first solar power satellite is commissioned, there will always be sceptics of what we are doing. However, that was also true of reusable rockets and cubesats, both of which are now mainstream technology. SBSP is a “no-regrets” investment that will see huge environmental and economic benefits, with spin-off technologies in wireless power beaming, in-space assembly and photovoltaics.

It is the ultimate blend of space technology and societal benefit, which will inspire the next generation of students into physics and engineering. Currently, the UK has a leadership position in SBSP, and if we have the vision and ambition, there is nothing to lose and everything to gain from backing this. We just need to get on with the job.

The post Space-based solar power: ‘We have nothing to lose and everything to gain’ appeared first on Physics World.

]]>
Opinion and reviews Martin Soltau explains why beaming sunlight to Earth as microwaves could help solve our energy needs https://physicsworld.com/wp-content/uploads/2024/11/2024-10-Transactions-Solar-in-space-benefits-2245354009-shutterstock_andrey_l.jpg newsletter
Axion clouds around neutron stars could reveal dark matter origins https://physicsworld.com/a/axion-clouds-around-neutron-stars-could-reveal-dark-matter-origins/ Mon, 04 Nov 2024 09:00:00 +0000 https://physicsworld.com/?p=117827 Interactions between these as-yet hypothetical particles and neutron stars' strong magnetic field would produce photons that radio telescopes can detect

The post Axion clouds around neutron stars could reveal dark matter origins appeared first on Physics World.

]]>
Hypothetical particles called axions could form dense clouds around neutron stars – and if they do, they will give off signals that radio telescopes can detect, say researchers in the Netherlands, the UK and the US. Since axions are a possible candidate for the mysterious substance known as dark matter, this finding could bring us closer to understanding it.

Around 85% of the universe’s mass consists of matter that appears “dark” to us. We can observe its gravitational effect on structures such as galaxies, but we cannot observe it directly. This is because dark matter hardly interacts with anything as far as we know, making it very difficult to detect. So far, searches for dark matter on Earth and in space have found no evidence for any of the various dark matter candidates.

The new research raises hopes that axions could be different. These neutral, bosonic particles are extremely light and hardly interact with ordinary matter. They get their name from a brand of soap, having been first proposed in the 1970s as a way of “cleaning up” a problem in quantum chromodynamics (QCD). More recently, astronomers have suggested they could clean up cosmology, too, by playing a role in the formation of galaxies in the early universe. They would also be a clean start for particle physics, providing evidence for new physics beyond the Standard Model.

Signature signals

But how can we detect axions if they are almost invisible to us? In the latest work, researchers at the University of Amsterdam, Princeton University and the University of Oxford showed that axions, if they exist, will be produced in large quantities at the polar regions of neutron stars. (Axions may also be components of dark matter “halos” believed to be present in the universe, but this study investigated axions produced by neutron stars themselves.) While many axions produced in this way will escape, some will be captured by the stars’ strong gravitational field. Over millions of years, axions will therefore accumulate around neutron stars, forming a cloud dense enough to give off detectable signals.

To reach these conclusions, the researchers examined various axion cloud interaction mechanisms, including self-interaction, absorption by neutron star nuclei and electromagnetic interactions. They concluded that for most axion masses, it is the last mechanism – specifically, a process called resonant axion-photon mixing – that dominates. Notably, this mechanism should produce a stream of low-energy photons in the radiofrequency range.

The team also found that these radio emissions would be connected to four distinct phases of axion cloud evolution. These are a growth phase after the neutron star forms; a saturation phase during normal life; a magnetorotational decay phase towards the later stages of the star’s existence; and finally a large burst of radio waves when the neutron star dies.

Turn on the radio

The researchers say that several large radio telescopes around the globe could play a role in detecting these radiofrequency signatures. Examples include the Low-Frequency Array (LOFAR) in the Netherlands; the Murchison Widefield Array in Australia; and the Green Bank Telescope in the US. To optimize the chances of picking up an axion signal, the collaboration recommends specific observation times, bandwidths and signal-to-noise ratios that these radio telescopes should adhere to. By following these guidelines, they say, the LOFAR setup alone could detect up to four events per year.

Dion Noordhuis, a PhD student at Amsterdam and first author of a Physical Review X paper on the research, acknowledges that there could be other observational signals beyond those explored in the paper. These will require further investigation, and he suggests that a full understanding will require complementary efforts from multiple branches of physics, including particle (astro)physics, plasma physics and observational radioastronomy. “This work thereby opens up a new, cross-disciplinary field with lots of opportunities for future research,” he tells Physics World.

Sankarshana Srinivasan, an astrophysicist from the Ludwig Maximilian University in Munich, Germany, who was not involved in the research, agrees that the QCD axion is a well-motivated candidate for dark matter. The Amsterdam-Princeton-Oxford team’s biggest achievement, he says, is to realize how axion clouds could enhance the signal, while the team’s “state-of-the-art” modelling makes the work stand out. However, he also urges caution because all theories of axion-photon mixing around neutron stars make assumptions about the stars’ magnetospheres, which are still poorly understood.

The post Axion clouds around neutron stars could reveal dark matter origins appeared first on Physics World.

]]>
Research update Interactions between these as-yet hypothetical particles and neutron stars' strong magnetic field would produce photons that radio telescopes can detect https://physicsworld.com/wp-content/uploads/2024/11/4-11-2024-axion-cloud.jpg newsletter1
Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study https://physicsworld.com/a/universes-lifespan-too-short-for-monkeys-to-type-out-shakespeares-works-finds-study/ Fri, 01 Nov 2024 15:00:57 +0000 https://physicsworld.com/?p=117819 About 200,000 monkeys could type out “I chimp, therefore I am” before the universe ends, however

The post Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study appeared first on Physics World.

]]>
According to the well-known thought experiment, the infinite monkeys theorem, a monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of William Shakespeare purely by chance.

Yet a new analysis by two mathematicians in Australia finds that even a troop might not have the time to do so within the supposed timeframe of the universe.

To find out, the duo created a model that includes 30 keys – all the letters in the English language plus punctuation marks. They assumed a constant chimpanzee population of 200,000 could be enlisted to the task, each typing at one key per second until the end of the universe in about 10¹⁰⁰ years.

“We decided to look at the probability of a given string of letters being typed by a finite number of monkeys within a finite time period consistent with estimates for the lifespan of our universe,” notes mathematician Stephen Woodcock from the University of Technology Sydney.

The mathematicians found that there is only a 5% chance a single monkey would type “bananas” within its own lifetime of just over 30 years. Yet even with all the chimps feverishly typing away, they would not be able to produce Shakespeare’s entire works (coming in at over 850,000 words) before the universe ends. They would, however, be able to type “I chimp, therefore I am”.
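The single-monkey “bananas” figure can be reproduced with a back-of-envelope calculation. The sketch below assumes, as in the study, a 30-key keyboard pressed uniformly at random once per second over a roughly 30-year lifespan; it also treats overlapping 7-keystroke windows as independent, which is a slight approximation:

```python
# Probability that one monkey types "bananas" at least once in its lifetime,
# pressing one of 30 keys uniformly at random, once per second
keys = 30
word_len = len("bananas")           # 7 characters
p_hit = (1 / keys) ** word_len      # chance a given 7-keystroke window matches

seconds_per_year = 365.25 * 24 * 3600
lifetime_windows = int(30 * seconds_per_year)   # ~9.5e8 starting positions

# Treat windows as independent (a slight approximation)
p_lifetime = 1 - (1 - p_hit) ** lifetime_windows
print(f"{p_lifetime:.1%}")  # roughly 4%, in line with the ~5% quoted in the study
```

For Shakespeare’s complete works the same arithmetic gives a success probability so small that even 200,000 chimps and 10¹⁰⁰ years make no practical difference – which is the paper’s point.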

“It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors conclude, adding that while the infinite monkeys theorem is true, it is also “somewhat misleading”, or rather it’s “not to be” in reality.

The post Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study appeared first on Physics World.

]]>
Blog About 200,000 monkeys could type out “I chimp, therefore I am” before the universe ends, however https://physicsworld.com/wp-content/uploads/2024/11/chimp-with-laptop-92236499-iStock_GlobalP.jpg
Nanocrystal shape affects molecular binding https://physicsworld.com/a/nanocrystal-shape-affects-molecular-binding/ Fri, 01 Nov 2024 13:00:16 +0000 https://physicsworld.com/?p=117811 Flatter nanocrystals are unexpectedly good at getting ligands to attach to them, which could have applications in electronics and biomedical science

The post Nanocrystal shape affects molecular binding appeared first on Physics World.

]]>
Molecules known as ligands attach more densely to flatter, platelet-shaped semiconductor nanocrystals than they do to spherical ones – a counterintuitive result that could lead to improvements in LEDs and solar cells as well as applications in biomedicine. While spherical nanoparticles are more curved than platelets, and were therefore expected to have the highest density of ligands on their surfaces, Guohua Jia and colleagues at Australia’s Curtin University say they observed the exact opposite.

“We found that the density of a commonly employed ligand, oleylamine (OLA), on the surface of zinc sulphide (ZnS) nanoparticles is highest for nanoplatelets, followed by nanorods and finally nanospheres,” Jia says.

Colloidal semiconductor nanocrystals show promise for a host of technologies, including field-effect transistors, chemical catalysis and fluorescent biomedical imaging as well as LEDs and photovoltaic cells. Because nanocrystals have a large surface area relative to their volume, their surfaces play an important role in many physical and chemical processes.

Notably, these surfaces can be modified and functionalized with ligands, which are typically smaller molecules such as long-chain amines, thiols, phosphines and phosphonates. The presence of these ligands changes the nanocrystals’ behaviour and properties. For example, they can make the nanocrystals hydrophilic or hydrophobic, and they can change the speed at which charge carriers travel through them. This flexibility allows nanocrystals to be designed and engineered for specific catalytic, optoelectronic or biomedical applications.

Quantifying ligand density

Previous research showed that the size of nanocrystals affects how many surface ligands can attach to them. The curvature of the crystals can also have an effect. The new work adds to this body of research by exploring the role of nanocrystal shape in more detail.

In their experiments, Jia and colleagues measured the density of OLA ligands on ZnS nanocrystals using three techniques: thermogravimetric analysis-differential scanning calorimetry; 1H nuclear magnetic resonance spectroscopy; and inductively-coupled plasma-optical emission spectrometry. They combined these measurements with semi-empirical molecular dynamics simulations.

The experiments, which are detailed in the Journal of the American Chemical Society, revealed that ZnS nanoplatelets with flat basal planes and uniform surfaces allow more ligands to attach tightly to them. This is because the ligands can stack in a parallel fashion on the nanoplatelets, whereas such tight stacking is more difficult on ZnS nanodots and nanorods due to staggered atomic arrangements and multiple steps on their surfaces, Jia tells Physics World. “This results in a lower ligand density than on nanoplatelets,” he says.

The Curtin researchers now plan to study how the differently-shaped nanocrystals – spherical dots, rods and platelets – enter biological cells. This study will be important for improving the efficacy of targeted drug delivery.

Mysterious brown dwarf is two objects, not one https://physicsworld.com/a/mysterious-brown-dwarf-is-two-objects-not-one/ Fri, 01 Nov 2024 10:45:09 +0000 https://physicsworld.com/?p=117810 Binary nature of Gliese 229 B explains its lacklustre brightness

The post Mysterious brown dwarf is two objects, not one appeared first on Physics World.

]]>
Two independent studies suggest that the brown dwarf Gliese 229 B is not a single object, but rather a pair of brown dwarfs. The two teams reached this conclusion in different ways, with one using a combination of instruments at the European Southern Observatory’s Very Large Telescope (VLT) in Chile, and the other taking advantage of the extreme resolution of the infrared spectra measured by the Keck Observatory in Hawaii.

With masses between those of gas-giant planets and stars, brown dwarfs are too small to reach the extreme temperatures and pressures required to fuse hydrogen in their cores. Instead, a brown dwarf glows as it radiates heat accumulated during the gravitational collapse of its formation. While brown dwarfs are much dimmer than stars, their brightness increases with mass – much like stars.

In 1994, the first brown dwarf ever to be confirmed was spotted in orbit around a red dwarf star. Dubbed Gliese 229 B, the brown dwarf has a methane-rich atmosphere remarkably similar to Jupiter’s – and this was the first planet-like atmosphere observed outside the solar system. The discovery was especially important since it would help astronomers to gain deeper insights into the formation and evolution of massive exoplanets.

Decades-long mystery

Since the discovery, extensive astrometry and radial velocity measurements have tracked Gliese 229 B’s gravitational influence on its host star – allowing astronomers to constrain its mass to 71 Jupiter masses. But this mass seemed too high, sparking a decades-long astronomical mystery.

“This value didn’t make any sense, since a brown dwarf of that mass would be much brighter than Gliese 229 B. Therefore, astronomers got worried that our models of stars and brown dwarfs might be missing something big,” explains Jerry Xuan at the California Institute of Technology (Caltech), who led the international collaboration responsible for one of the studies. Xuan’s team also included Rebecca Oppenheimer – who was part of the team that first discovered Gliese 229 B as a PhD student at Caltech.

Xuan’s team investigated the mass–brightness mystery using separate measurements from two cutting-edge instruments at the VLT: CRIRES+, a high-resolution infrared spectrograph, and the GRAVITY interferometer.

“CRIRES+ disentangles light from two objects by dispersing it at high spectral resolution, whereas GRAVITY combines light from four different eight metre telescopes to see much finer spatial details than previous instruments can resolve,” Xuan explains. “GRAVITY interferes light from all four of these telescopes to enhance the spatial resolution.”

Time-varying shifts

Meanwhile, a team of US astronomers led by Samuel Whitebrook at the University of California, Santa Barbara (UCSB), studied Gliese 229 B using the Near-Infrared Spectrograph (NIRSPEC) at the Keck Observatory in Hawaii. The extreme resolution of this instrument allowed them to measure time-varying shifts in the brown dwarf’s spectrum, which could hint at an as-yet unforeseen gravitational influence on its orbit.

Within GRAVITY’s combined observations, Xuan’s team discovered that Gliese 229 B was not a single object, but a pair of brown dwarfs that are separated by just 16 Earth–Moon distances and orbit each other every 12 days.
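The reported orbit is consistent with the measured mass. As a back-of-envelope check (not taken from the paper, and treating the quoted separation as the semi-major axis), Kepler's third law applied to a 16 Earth–Moon-distance separation and a 12-day period roughly recovers the ~71 Jupiter-mass total:

```python
EARTH_MOON_KM = 384_400            # mean Earth-Moon distance
AU_KM = 149_597_870.7
MSUN_IN_MJUP = 1047.6              # one solar mass in Jupiter masses

a_au = 16 * EARTH_MOON_KM / AU_KM  # separation taken as the semi-major axis
p_yr = 12 / 365.25                 # 12-day period in years

# Kepler's third law in solar units: M_total [Msun] = a^3 / P^2
m_total_mjup = (a_au**3 / p_yr**2) * MSUN_IN_MJUP
print(round(m_total_mjup))         # ~67, consistent with the ~71 from astrometry
```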

And, after fitting CRIRES+’s data to existing brown dwarf models, they detected features within Gliese 229 B’s spectrum that clearly indicated the presence of two different atmospheres.

Frequency shifts

Whitebrook’s team came to a very similar conclusion. Measuring the brown dwarf’s infrared spectrum at different epochs, they identified frequency shifts which had not shown up in previous measurements. Again, these discrepancies clearly hinted at the presence of a hidden binary companion to Gliese 229B.

The two objects comprising the binary have been named Gliese 229Ba and Gliese 229Bb. Crucially, both of these bodies would be significantly dimmer than a single brown dwarf of their combined mass. If the teams’ conclusions are correct, this could finally explain why Gliese 229 B is so massive despite its lacklustre brightness.
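The arithmetic behind this is simple: if luminosity grows faster than linearly with mass, splitting a given total mass into two bodies always reduces the combined brightness. The toy model below assumes an illustrative power law L ∝ M^α with α = 2 (an assumption made here for illustration; real brown-dwarf luminosities also depend strongly on age) and a hypothetical 38 + 33 Jupiter-mass split:

```python
def lum(mass_mjup, alpha=2.0):
    # toy power law L ~ M**alpha; alpha is assumed, not fitted
    return mass_mjup**alpha        # arbitrary units

single = lum(71)                   # one object carrying the full mass
binary = lum(38) + lum(33)         # hypothetical split of the same total
print(binary / single)             # ~0.5: the pair is about half as bright
```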

The findings also suggest that Gliese 229 B is only the first of many brown dwarf binaries to be discovered. Based on their results, Xuan’s team believes it is likely that binaries of brown dwarfs, and potentially even of giant planets like Jupiter, also exist around other stars. These would provide intriguing targets for future observations.

“Finally, our findings also show how complex and messy the star formation process is,” Xuan says. “We should always be open to surprises, after all, the solar system is only one system in billions of stellar systems in the Milky Way galaxy.”

The Caltech-led team describes its observations in Nature, and the UCSB team in The Astrophysical Journal Letters.

A PhD in cups of espresso: how logging my coffee consumption helped me write my thesis https://physicsworld.com/a/a-phd-in-cups-of-espresso-how-logging-my-coffee-consumption-helped-me-write-my-thesis/ Fri, 01 Nov 2024 09:20:48 +0000 https://physicsworld.com/?p=117675 Vittorio Aita explains an unusual side project that kept him on track during the last months of his PhD

The post A PhD in cups of espresso: how logging my coffee consumption helped me write my thesis appeared first on Physics World.

]]>
Every PhD student has been warned at least once that doing a PhD is stressful, and that writing a thesis can make you thoroughly fed up, even if you’re working on a topic you’re passionate about.

When I was coming to the end of my PhD, this thought began to haunt me. I was enjoying my research on the interaction between light and plasmonic metamaterials, but I worried that the stress of writing my thesis would spoil it for me. Perhaps guided by this fear, I started logging my writing activity in a spreadsheet. I recorded how many hours per day I spent writing and how many pages and figures I had completed at the end of each day.

The immediate benefit was that the spreadsheet granted me a quick answer when, once a week, my supervisor asked me the deeply feared question: “So, how many pages?” Probably to his great surprise, my first answer was “Nine cups of espresso.”

In Naples, Italy, we have a relationship with coffee that borders on religious

The idea of logging my writing activity probably came from my background as an experimental physicist, but the use of espresso cups as a unit goes back to my roots in Naples, Italy. There, we have a relationship with coffee that borders on religious. And so, in a difficult time, I turned to the divine and found my strength in the consumption of coffee.

Graph showing PhD progress plotted against daily espresso consumption

As well as tracking my writing, I also recorded the number of cups of espresso I drank each day. The data I gathered, which is summarized in the above graph, turned out to be quite insightful. Let’s get scientific:

I began writing my thesis on 27 April 2023. As shown by the spacing between entries in the following days, I started at a slow pace, dedicating myself to writing for only two days a week and consuming an average of three units of coffee per day. I should add that it was quite easy to “write” 16 pages on the first day because at the start of the process, you get a lot of pages free. Don’t underestimate the joy of realizing you’ve written 16 pages at once, even if those are just the table of contents and other placeholders.

In the second half of May, there was a sudden, two-unit increase in daily coffee consumption, with a corresponding increase in the number of pages written. Clearly by the sixth entry of my log, I was starting to feel like I wasn’t writing enough. This called for more coffee, and my productivity consequently peaked at seven pages in one day. By the end of May, I had already written almost 80 pages.

Readers with an eye for detail will also notice that on the second to last day of May, coffee consumption is not expressed as an integer. To explain this, I must refer again to my Italian background. Although I chose to define the unit of coffee by volume (a unit of espresso is the amount obtained from a reusable capsule), the half-integer value reflects the importance of the quality of the grind. I had been offered a filtered coffee that my espresso-based cultural heritage could not consider worth a whole unit. Apologies to filter coffee drinkers.

From looking at the graph entries between the end of May and the middle of August, you would be forgiven for thinking that I took a holiday, despite my looming deadline. You would however be wrong. My summer break from the thesis was spent working on a paper.

However, in the last months of work, my slow-paced rhythm was replaced by a full-time commitment to my thesis. Days of intense writing (and figure-making!) were interspersed with final efforts to gather new data in the lab.

In October some photons from the end of the tunnel started to be detectable, but at this point I unfortunately caught COVID-19. As you can tell from the graph, in the last weeks of writing I worked overtime to get back on track. This necessitated a sudden increase in coffee units: having one more unit of coffee each day got me through a week of very long working days, peaking at a single day of 16 hours of work and 6 cups of espresso.

I felt suddenly lighter and I was filled with a deep feeling of fulfilment

I finally submitted my thesis on 20 December, and I did it with one of the most important people in my life at my side: my grandma. I clicked “send” and hugged her for as long as we both could breathe. I felt suddenly lighter and I was filled with a deep feeling of fulfilment. I had totalled 304 hours of writing, 199 pages and an impressive 180 cups of espresso.
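A log like the one described above takes only a few lines to keep and tally. The sketch below uses made-up entries, not the author's actual data:

```python
# (date, hours_written, pages_completed, espresso_units) -- hypothetical entries
log = [
    ("2023-04-27", 3, 16, 3),
    ("2023-05-20", 8, 7, 5),
    ("2023-05-30", 6, 4, 3.5),  # half units allowed (filter coffee)
]

hours = sum(entry[1] for entry in log)
pages = sum(entry[2] for entry in log)
cups = sum(entry[3] for entry in log)
print(hours, pages, cups)  # 17 27 11.5
```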

With hindsight, this experience taught me that the silly and funny task of logging how much coffee I drank was in fact a powerful tool that stopped me from getting fed up with writing.

More often than not, I would observe the log after a day of what felt like slow progress and realize that I had achieved more than I thought. On other days, when I was disappointed with the number of pages I had written (once even logging a negative number), the amount of coffee I had consumed would remind me of how challenging those pages had been to complete.

Doing a PhD can be an emotional experience, particularly when writing up the thesis: the self-realization, the pride, the constant need to improve your work, and the desire to convey the spark and pull of curiosity that first motivated you. This must all be done in a way that is both enjoyable to read and sufficiently technical.

All of this can get frustrating, but I hope sharing this will help future students embrace the road to achieving a PhD. Don’t take yourself too seriously and keep looking for the fun in what you do.

Peter Hirst: MIT Sloan Executive Education develops leadership skills in STEM employees https://physicsworld.com/a/peter-hirst-mit-sloan-executive-education-develops-leadership-skills-in-stem-employees/ Thu, 31 Oct 2024 16:14:26 +0000 https://physicsworld.com/?p=117799 This podcast is sponsored by MIT Sloan School of Management

The post Peter Hirst: MIT Sloan Executive Education develops leadership skills in STEM employees appeared first on Physics World.

]]>
Physicists and others with STEM backgrounds are sought after in industry for their analytical skills. However, traditional training in STEM subjects is often lacking when it comes to nurturing the soft skills that are needed to succeed in managerial and leadership positions.

Our guest in this podcast is Peter Hirst, who is Senior Associate Dean, Executive Education at the MIT Sloan School of Management. He explains how MIT Sloan works with executives to ensure that they efficiently and effectively acquire the skills and knowledge needed to be effective leaders.

This podcast is sponsored by the MIT Sloan School of Management

Bursts of embers play outsized role in wildfire spread, say physicists https://physicsworld.com/a/bursts-of-embers-play-outsized-role-in-wildfire-spread-say-physicists/ Thu, 31 Oct 2024 13:00:29 +0000 https://physicsworld.com/?p=117782 Experiments on tracking firebrands could improve predictions of spot-fire risks

The post Bursts of embers play outsized role in wildfire spread, say physicists appeared first on Physics World.

]]>
New field experiments carried out by physicists in California’s Sierra Nevada mountains suggest that intermittent bursts of embers play an unexpectedly large role in the spread of wildfires, calling into question some aspects of previous fire models. While this is not the first study to highlight the importance of embers, it does indicate that standard modelling tools used to predict wildfire spread may need to be modified to account for these rare but high-impact events.

Embers form during a wildfire due to a combination of heat, wind and flames. Once lofted into the air, they can travel long distances and may trigger new “spot fires” when they land. Understanding ember behaviour is therefore important for predicting how a wildfire will spread and helping emergency services limit infrastructure damage and prevent loss of life.

Watching it burn

In their field experiments, Tirtha Banerjee and colleagues at the University of California Irvine built a “pile fire” – essentially a bonfire fuelled by a representative mixture of needles, branches, pinecones and pieces of wood from ponderosa pine and Douglas fir trees – in the foothills of the Sierra Nevada mountains. A high-frequency (120 frames per second) camera recorded the fire’s behaviour for 20 minutes, and the researchers placed aluminium baking trays around it to collect the embers it ejected.

After they extinguished the pile fire, the researchers brought the ember samples back to the laboratory and measured their size, shape and density. Footage from the camera enabled them to estimate the fire’s intensity based on its height. They also used a technique called particle tracking velocimetry to follow firebrands and calculate their trajectories, velocities and accelerations.

Highly intermittent ember generation

Based on the footage, the team concluded that ember generation is highly intermittent, with occasional bursts containing orders of magnitude more embers than were ejected at baseline. Existing models do not capture such behaviour well, says Alec Petersen, an experimental fluid dynamicist at UC Irvine and lead author of a Physics of Fluids paper on the experiment. In particular, he explains that models with a low computational cost often make simplifications in characterizing embers, especially with regard to fire plumes and ember shapes. This means that while they can predict how far an average firebrand with a certain size and shape will travel, the accuracy of those predictions is poor.

“Although we care about the average behaviour, we also want to know more about outliers,” he says. “It only takes a single ember to ignite a spot fire.”

As an example of such an outlier, Petersen notes that sometimes a strong updraft from a fire plume coincides with the fire emitting a large number of embers. Similar phenomena occur in many types of turbulent flows, including atmospheric winds as well as buoyant fire plumes, and they are characterized by statistically infrequent but extreme fluctuations in velocity. While these fluctuations are rare, they could partially explain why the team observed large (>1mm) firebrands travelling further than models predict, he tells Physics World.

This is important, Petersen adds, because large embers are precisely the ones with enough thermal energy to start spot fires. “Given enough chances, even statistically unlikely events can become probable, and we need to take such events into account,” he says.
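The point about unlikely events becoming probable can be illustrated with a heavy-tailed distribution. The sketch below assumes lognormal fluctuations purely for illustration (the paper's actual statistics are not specified here): most draws are modest, yet extreme bursts still occur routinely once enough samples accumulate.

```python
import random

random.seed(1)  # reproducible draws

def ember_count():
    # lognormal draws: most are modest, a few are orders of magnitude larger
    return random.lognormvariate(2.0, 1.5)

samples = [ember_count() for _ in range(10_000)]
mean = sum(samples) / len(samples)
bursts = sum(1 for s in samples if s > 20 * mean)
print(bursts)  # a few dozen draws land more than 20x above the mean
```

A Gaussian with the same mean would essentially never produce such outliers; a heavy tail makes them an expected part of the record, which is why a single rare burst can still set a spot fire.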

New models, fresh measurements

The researchers now hope to reformulate operational models to do just this, but they acknowledge that this will be challenging. “Predicting spot fire risk is difficult and we’re only just scratching the surface of what needs to be included for accurate and useful predictions that can help first responders,” Petersen says.

They also plan to do more experiments in conjunction with a consortium of fire researchers that Banerjee set up. Beginning in November, when temperatures in California are cooler and the wildfire risk is lower, members of the new iFirenet consortium plan to collaborate on a large-scale field campaign at the UC Berkeley Research Forests. “We’ll have tonnes of research groups out there, measuring all sorts of parameters for our various projects,” Petersen says. “We’ll be trying to refine our firebrand tracking experiments too, using multiple cameras to track them in 3D, hopefully supplemented with a thermal camera to measure their temperatures.

“My background is in measuring and describing the complex dynamics of particles carried by turbulent flows,” Petersen continues. “I don’t have the same deep expertise studying fires that I do in experimental fluid dynamics, so it’s always a challenge to learn the best practices of a new field and to familiarize yourself with the great research folks have done in the past and are doing now. But that’s what makes studying fluid dynamics so satisfying – it touches so many corners of our society and world, there’s always something new to learn.”

IHEP-SDU in search of ‘quantum advantage’ to open new frontiers in high-energy physics https://physicsworld.com/a/ihep-sdu-in-search-of-quantum-advantage-to-open-new-frontiers-in-high-energy-physics/ Thu, 31 Oct 2024 10:05:25 +0000 https://physicsworld.com/?p=117710 Opportunities in quantum science and technology a high priority for China’s high-energy physicists

The post IHEP-SDU in search of ‘quantum advantage’ to open new frontiers in high-energy physics appeared first on Physics World.

]]>
The particle physics community is in the vanguard of a global effort to realize the potential of quantum computing hardware and software for all manner of hitherto intractable research problems across the natural sciences. The end-game? A paradigm shift – dubbed “quantum advantage” – where calculations that are unattainable or extremely expensive on classical machines become possible, and practical, with quantum computers.

A case study in this regard is the Institute of High Energy Physics (IHEP), the largest basic science laboratory in China and part of the Chinese Academy of Sciences. Headquartered in Beijing, IHEP hosts a multidisciplinary scientific programme spanning elementary particle physics and astrophysics, as well as the planning, design and construction of large-scale accelerator projects – among them the China Spallation Neutron Source, which launched in 2018, and the High Energy Photon Source, due to come online in 2025.

Quantum opportunity

Notwithstanding its ongoing investment in experimental infrastructure, IHEP is increasingly turning its attention to the application of quantum computing and quantum machine-learning technologies to accelerate research discovery. In short, exploring use-cases in theoretical and experimental particle physics where quantum approaches promise game-changing scientific breakthroughs. A core partner in this endeavour is Shandong University (SDU) Institute of Frontier and Interdisciplinary Science, home to another of China’s top-tier research programmes in high-energy physics (HEP).

With senior backing from Weidong Li and Xingtao Huang – physics professors at IHEP and SDU, respectively – the two laboratories began collaborating on the applications of quantum science and technology in summer 2022. This was followed by the establishment of a joint working group 12 months later. Operationally, the Quantum Computing for Simulation and Reconstruction (QC4SimRec) initiative comprises eight faculty members (drawn from both institutes) and is supported by a multidisciplinary team of two postdoctoral scientists and five PhD students.

Hideki Okawa

“QC4SimRec is part of IHEP’s at-scale quantum computing effort, tapping into cutting-edge resource and capability from a network of academic and industry partners across China,” explains Hideki Okawa, a professor who heads up quantum applications research at IHEP (as well as co-chairing QC4SimRec alongside Teng Li, an associate professor in SDU’s Institute of Frontier and Interdisciplinary Science). “The partnership with SDU is a logical progression,” he adds, “building on a track-record of successful collaboration between the two centres in areas like high-performance computing, offline software and machine-learning applications for a variety of HEP experiments.”

Right now, Okawa, Teng Li and the QC4SimRec team are set on expanding the scope of their joint research activity. One principal line of enquiry focuses on detector simulation – i.e. simulating the particle shower development in the calorimeter, which is one of the most demanding tasks for the central processing unit (CPU) in collider experiments. Other early-stage applications include particle tracking, particle identification, and analysis of the fundamental physics of particle dynamics and collision.

“Working together in QC4SimRec,” explains Okawa, “IHEP and SDU are intent on creating a global player in the application of quantum computing and quantum machine-learning to HEP problems.”

Sustained scientific impact, of course, is contingent on recruiting the brightest and best talent in quantum hardware and software, with IHEP’s near-term focus directed towards engaging early-career scientists, whether from domestic or international institutions. “IHEP is very supportive in this regard,” adds Okawa, “and provides free Chinese language courses to fast-track the integration of international scientists. It also helps that our bi-weekly QC4SimRec working group meetings are held in English.”

A high-energy partnership

Around 700 km south-east of Beijing, the QC4SimRec research effort at SDU is overseen by Xingtao Huang, dean of the university’s Institute of Frontier and Interdisciplinary Science and an internationally recognized expert in machine-learning technologies and offline software for data processing and analysis in particle physics.

“There’s huge potential upside for quantum technologies in HEP,” he explains. In the next few years, for example, QC4SimRec will apply innovative quantum approaches to build on SDU’s pre-existing interdisciplinary collaborations with IHEP across a range of HEP initiatives – including the Beijing Spectrometer III (BESIII), the Jiangmen Underground Neutrino Observatory (JUNO) and the Circular Electron-Positron Collider (CEPC).

Jiangmen Underground Neutrino Observatory

One early-stage QC4SimRec project evaluated quantum machine-learning techniques for the identification and discrimination of muon and pion particles within the BESIII detector. Comparison with traditional machine-learning approaches shows equivalent performance on the same datasets and, by extension, the feasibility of applying quantum machine-learning to data analysis in next-generation collider experiments.

“This is a significant result,” explains Huang, “not least because particle identification – the identification of charged-particle species in the detector – is one of the biggest challenges in HEP experiments.”

Xingtao Huang

Huang is currently seeking to recruit senior-level scientists with quantum and HEP expertise from Europe and North America, building on a well-established faculty team of 48 staff members (32 of them full professors) working on HEP. “We have several open faculty positions at SDU in quantum computing and quantum machine-learning,” he notes. “We’re also interested in recruiting talented postdoctoral researchers with quantum know-how.”

As a signal of intent, and to raise awareness of SDU’s global ambitions in quantum science and technology, Huang and colleagues hosted a three-day workshop (co-chaired by IHEP) last summer to promote the applications of quantum computing and classical/quantum machine-learning in particle physics. With over 100 attendees and speakers at the inaugural event, including several prominent international participants, a successful follow-on workshop was held in Changchun earlier this year, with planning well under way for the next instalment in 2025.

Along a related coordinate, SDU has launched a series of online tutorials to support aspiring Masters and PhD students keen to further their studies in the applications of quantum computing and quantum machine-learning within HEP.

“Quantum computing is a hot topic, but there’s still a relatively small community of scientists and engineers working on HEP applications,” concludes Huang. “Working together, IHEP and SDU are building the interdisciplinary capacity in quantum science and technology to accelerate frontier research in particle physics. Our long-term goal is to establish a joint national laboratory with dedicated quantum computing facilities across both campuses.”

One thing is clear: the QC4SimRec collaboration offers ambitious quantum scientists a unique opportunity to progress alongside China’s burgeoning quantum ecosystem – an industry, moreover, that’s being heavily backed by sustained public and private investment. “For researchers who want to be at the cutting edge in quantum science and HEP, China is as good a place as any,” Okawa concludes.

Quantum machine-learning for accelerated discovery

To understand the potential for quantum advantage in specific HEP contexts, QC4SimRec scientists are currently working on “rediscovering” the exotic particle Zc(3900) using quantum machine-learning techniques.

In terms of the back-story: Zc(3900) is an exotic subatomic particle made up of quarks (the building blocks of protons and neutrons) and believed to be the first tetraquark state observed experimentally – an observation that, in the process, deepened our understanding of quantum chromodynamics (QCD). The particle was discovered in 2013 using the BESIII detector at the Beijing Electron-Positron Collider (BEPCII), with independent observation by the Belle experiment at Japan’s KEK particle physics laboratory.

As part of their study, the IHEP-SDU team deployed the so-called Quantum Support Vector Machine algorithm (a quantum variant of a classical algorithm) for training, using simulated Zc(3900) signals and randomly selected events from the real BESIII data as backgrounds.

With the quantum machine-learning approach, performance is competitive with classical machine-learning systems – though, crucially, it is achieved with a smaller training dataset and fewer data features. Investigations are ongoing to demonstrate enhanced signal sensitivity with quantum computing – work that could ultimately point the way to the discovery of new exotic particles in future experiments.
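At the heart of a Quantum Support Vector Machine is a kernel computed as the fidelity (overlap) between quantum feature-map states. The minimal numpy sketch below uses a simple per-feature Ry angle encoding purely for illustration; the feature map used in the actual Zc(3900) study is not described here.

```python
import numpy as np

def quantum_kernel(x, y):
    # fidelity |<psi(x)|psi(y)>|^2 for a per-feature Ry angle encoding:
    # each feature contributes a factor cos^2((x_i - y_i) / 2)
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.prod(np.cos((x - y) / 2) ** 2))

# toy (made-up) events with two features each; a kernel SVM trains on this Gram matrix
data = np.array([[0.1, 1.2], [0.4, 0.9], [2.0, 2.5]])
gram = np.array([[quantum_kernel(a, b) for b in data] for a in data])
print(np.allclose(np.diag(gram), 1.0))  # True: every state overlaps itself fully
```

In a real QSVM the Gram matrix is evaluated on quantum hardware or a simulator with a more expressive, entangling feature map, then handed to a classical SVM optimizer.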

IHEP and SDU logos

Chip-based optical tweezers manipulate microparticles and cells from a distance https://physicsworld.com/a/chip-based-optical-tweezers-manipulate-microparticles-and-cells-from-a-distance/ Thu, 31 Oct 2024 09:30:09 +0000 https://physicsworld.com/?p=117791 Integrated optical phased array uses a tightly focused beam of light to trap and manipulate biological particles

The post Chip-based optical tweezers manipulate microparticles and cells from a distance appeared first on Physics World.

]]>
Optical traps and tweezers can be used to capture and manipulate particles using non-contact forces. A focused beam of light allows precise control over the position of and force applied to an object, at the micron scale or below, enabling particles to be pulled and captured by the beam.

Optical manipulation techniques are garnering increased interest for biological applications. Researchers from Massachusetts Institute of Technology (MIT) have now developed a miniature, chip-based optical trap that acts as a “tractor beam” for studying DNA, classifying cells and investigating disease mechanisms. The device – which is small enough to fit in your hand – is made from a silicon-photonics chip and can manipulate particles up to 5 mm away from the chip surface, while maintaining a sterile environment for cells.

The promise of integrated optical tweezers

Integrated optical trapping provides a compact route to accessible optical manipulation compared with bulk optical tweezers, and has already been demonstrated using planar waveguides, optical resonators and plasmonic devices. However, many such tweezers can only trap particles directly on (or within several microns of) the chip’s surface and only offer passive trapping.

To make optical traps sterile for cell research, 150-µm-thick glass coverslips are required. However, the short focal heights of many integrated optical tweezers mean that the light beams can’t penetrate into standard sample chambers. Because such devices can only trap particles a few microns above the chip, they are incompatible with biological research that requires particles and cells to be trapped at much larger distances from the chip’s surface.

With current approaches, the only way to overcome this is to remove the cells and place them on the surface of the chip itself. This process contaminates the chip, however, meaning that each chip must be discarded after use and a new chip used for every experiment.

Trapping device for biological particles

Lead author Tal Sneh and colleagues developed an integrated optical phased array (OPA) that can focus emitted light at a specific point in the radiative near field of the chip. To date, many OPA devices have been motivated by LiDAR and optical communications applications, so their capabilities were limited to steering light beams in the far field using linear phase gradients. However, this approach does not generate the tightly focused beam required for optical trapping.

In their new approach, the MIT researchers used semiconductor manufacturing processes to fabricate a series of micro-antennas onto the chip. By creating specific phase patterns for each antenna, the researchers found that they could generate a tightly focused beam of light.
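The paper specifies the actual phase patterns used; as a generic illustration of the principle only (all numbers below are hypothetical, not taken from the MIT design), focusing a one-dimensional antenna array at a point above the chip requires each antenna’s phase to cancel its extra path length to the focal point, giving a hyperbolic phase profile:

```python
import math

def focusing_phases(n_antennas, pitch, focal_height, wavelength):
    """Hyperbolic phase profile that focuses a 1D phased array at a point
    focal_height above the array centre. Each antenna's phase cancels its
    extra path length to the focus (modulo 2*pi)."""
    centre = (n_antennas - 1) * pitch / 2
    k = 2 * math.pi / wavelength  # free-space wavenumber
    phases = []
    for i in range(n_antennas):
        x = i * pitch - centre                    # offset from array centre
        path = math.sqrt(x**2 + focal_height**2)  # distance to the focal point
        phases.append((-k * (path - focal_height)) % (2 * math.pi))
    return phases

# Hypothetical numbers: 64 antennas at 2 um pitch, focus 5 mm up, 1.55 um light
phi = focusing_phases(64, 2e-6, 5e-3, 1.55e-6)
```

Shifting such a profile (which the researchers achieved by varying the input laser wavelength) moves the focal spot, allowing the trap to be steered without any moving parts.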

Each antenna’s optical signal was also tightly controlled by varying the input laser wavelength to provide an active spatial tuning for tweezing particles. The focused light beam emitted by the chip could therefore be shaped and steered to capture particles located millimetres above the surface of the chip, making it suitable for biological studies.

The researchers used the OPA tweezers to optically steer and non-mechanically trap polystyrene microparticles at up to 5 mm above the chip’s surface. They also demonstrated stretching of mouse lymphoblast cells, in the first known cell experiment to use single-beam integrated optical tweezers.

The researchers point out that this is the first demonstration of trapping particles over millimetre ranges, with the operating distance of the new device orders of magnitude greater than other integrated optical tweezers. Plasmonic, waveguide and resonator tweezers, for example, can only operate at 1 µm above the surface, while microlens-based tweezers have been able to operate at 20 µm distances.

Importantly, the device is completely reusable and biocompatible, because the biological samples can be trapped and undergo manipulation while remaining within a sterile coverslip. This ensures that both the biological media and the chip stay free from contamination without needing complex microfluidics packaging.

This work provides a new modality for integrated optical tweezers, expanding their use into the biological domain to perform experiments on proteins and DNA, for example, as well as to sort and manipulate cells.

The researchers say that they hope to build on this research by creating a device with an adjustable focal height for the light beam, as well as introducing multiple trap sites to manipulate biological particles in more complex ways and employing the device to examine more biological systems.

The optical trap is described in Nature Communications.

Research update Integrated optical phased array uses a tightly focused beam of light to trap and manipulate biological particles https://physicsworld.com/wp-content/uploads/2024/10/31-10-24-MIT_Tractor-Beam.jpg
AI enters the fold with the 2024 Nobel Prize for Physics https://physicsworld.com/a/ai-enters-the-fold-with-the-2024-nobel-prize-for-physics/ Wed, 30 Oct 2024 16:19:25 +0000 https://physicsworld.com/?p=117696 Matin Durrani is pleased that the 2024 Nobel Prize for Physics brings AI under physicists' wing

The post AI enters the fold with the 2024 Nobel Prize for Physics appeared first on Physics World.

I’ll admit that this year’s Nobel Prize for Physics took us here at Physics World by surprise. Trying to guess who might win a Nobel is always a mug’s game but with condensed-matter physics having missed out since 2016, our money was on research into, say, metamaterials or twisted graphene winning. We certainly weren’t expecting machine learning and artificial intelligence (AI) to come up trumps.

Machine learning these days has a huge influence in physics, where it’s used in everything from the very practical (designing new circuits for quantum optics experiments) to the esoteric (finding new symmetries in data from the Large Hadron Collider). But it would be wrong to think that machine learning itself isn’t physics or that the Nobel committee – in honouring John Hopfield and Geoffrey Hinton – has been misguidedly seduced by some kind of “AI hype”.

Hopfield, 91, is a fully fledged condensed-matter physicist, who in the 1970s began to study the dynamics of biochemical reactions and their applications in neuroscience. In particular, he showed that the physics of spin glasses can be used to build networks of neurons to store and retrieve information. Hopfield applied his work to the problem of “associative memories” – how hearing a fragment of a song, say, can unlock a memory of the occasion we first heard it.

His work on the statistical physics and training of these “Hopfield networks” – and Hinton’s later on “Boltzmann machines” – paved the way for modern-day AI. Indeed, Hinton, a computer scientist, is often dubbed “the godfather of AI”. On the Physics World Weekly podcast, Anil Ananthaswamy – author of Why Machines Learn: the Elegant Maths Behind Modern AI – said Hinton’s contributions to AI were “immense”.

Of course, machine learning and AI are multidisciplinary endeavours, drawing on not just physics and mathematics, but neuroscience, computer science and cognitive science too. Imagine though, if Hinton and Hopfield had been given, say, a medicine Nobel prize. We’d have physicists moaning they’d been overlooked. Some might even say that this year’s Nobel Prize for Chemistry, which went to the application of AI to protein-folding, is really physics at heart.

We’re still in the early days for AI, which has its dangers. Indeed, Hinton quit Google last year so he could more freely express his concerns. But as this year’s Nobel prize makes clear, physics isn’t just drawing on machine learning and AI – it paved the way for these fields too.

Blog Matin Durrani is pleased that the 2024 Nobel Prize for Physics brings AI under physicists' wing https://physicsworld.com/wp-content/uploads/2024/10/brain-computer-intelligence-concept-landscape-1027941874-Shutterstock_Jackie-Niam.jpg
Two distinct descriptions of nuclei unified for the first time https://physicsworld.com/a/two-distinct-descriptions-of-nuclei-unified-for-the-first-time/ Wed, 30 Oct 2024 14:47:58 +0000 https://physicsworld.com/?p=117766 Hybrid approach focuses on short-range-correlated nucleon pairs

The post Two distinct descriptions of nuclei unified for the first time appeared first on Physics World.

In a new study, an international team of physicists has unified two distinct descriptions of atomic nuclei, taking a major step forward in our understanding of nuclear structure and strong interactions. For the first time, the particle physics perspective – where nuclei are seen as made up of quarks and gluons – has been combined with the traditional nuclear physics view that treats nuclei as collections of interacting nucleons (protons and neutrons). This innovative hybrid approach provides fresh insights into short-range correlated (SRC) nucleon pairs – fleeting configurations in which two nucleons come exceptionally close and interact strongly for mere femtoseconds. Although these interactions play a crucial role in the structure of nuclei, they have been notoriously difficult to describe theoretically.

“Nuclei (such as gold and lead) are not just a ‘bag of non-interacting protons and neutrons’,” explains Fredrick Olness at Southern Methodist University in the US, who is part of the international team. “When we put 208 protons and neutrons together to make a lead nucleus, they interact via the strong interaction force with their nearest neighbours; specifically, those neighbours within a ‘short range.’ These short-range interactions/correlations modify the composition of the nucleus and are a manifestation of the strong interaction force. An improved understanding of these correlations can provide new insights into both the properties of nuclei and the strong interaction force.”

To investigate the inner structure of atomic nuclei, physicists use parton distribution functions (PDFs). These functions describe how the momentum and energy of quarks and gluons are distributed within protons, neutrons, or entire nuclei. PDFs are typically obtained from high-energy experiments, such as those performed at particle accelerators, where nucleons or nuclei collide at close to the speed of light. By analysing the behaviour of the particles produced in these collisions, physicists can gain essential insights into their properties, revealing the complex dynamics of the strong interaction.

Traditional focus

However, traditional nuclear physics often focuses on the interactions between protons and neutrons within the nucleus, without delving into the quark and gluon structure of nucleons. Until now, these two approaches – one based on fundamental particles and the other on nuclear dynamics — remained separate. Now researchers in the US, Germany, Poland, Finland, Australia, Israel and France have bridged this gap.

The team developed a unified framework that integrates both the partonic structure of nucleons and the interactions between nucleons in atomic nuclei. This approach is particularly useful for studying SRC nucleon pairs, whose interactions have long been recognized as crucial to understanding the structure of nuclei but which have been notoriously difficult to describe using conventional theoretical models.

By combining particle and nuclear physics descriptions, the researchers were able to derive PDFs for SRC pairs, providing a detailed understanding of how quarks and gluons behave within these pairs.

“This framework allows us to make direct relations between the quark–gluon and the proton–neutron description of nuclei,” said Olness. “Thus, for the first time, we can begin to relate the general properties of nuclei (such as ‘magic number’ nuclei – those with a specific number of protons or neutrons that make them particularly stable – or ‘mirror nuclei’ – pairs of nuclei whose proton and neutron numbers are interchanged) to the characteristics of the quarks and gluons inside the nuclei.”

Experimental data

The researchers applied their model to experimental data from scattering experiments involving 19 different nuclei, ranging from helium-3 (with two protons and one neutron) to lead-208 (with 82 protons and 126 neutrons). By comparing their predictions with the experimental data, they were able to refine their model and confirm its accuracy.

The results showed a remarkable agreement between the theoretical predictions and the data, particularly when it came to estimating the fraction of nucleons that form SRC pairs. In light nuclei, such as helium, nucleons rarely form SRC pairs. However, in heavier nuclei like lead, nearly half of the nucleons participate in SRC pairs, highlighting the significant role these interactions play in shaping the structure of larger nuclei.

These findings not only validate the team’s approach but also open up new avenues for research.

“We can study what other nuclear characteristics might yield modifications of the short-ranged correlated pairs ratios,” explains Olness. “This connects us to the shell model of the nucleus and other theoretical nuclear models. With the new relations provided by our framework, we can directly relate elemental quantities described by nuclear physics to the fundamental quarks and gluons as governed by the strong interaction force.”

The new model can be further tested using data from future experiments, such as those planned at the Jefferson Lab and at the Electron–Ion Collider at Brookhaven National Laboratory. These facilities will allow scientists to probe quark–gluon dynamics within nuclei with even greater precision, providing an opportunity to validate the predictions made in this study.

The research is described in Physical Review Letters.

Research update Hybrid approach focuses on short-range-correlated nucleon pairs https://physicsworld.com/wp-content/uploads/2024/10/30-10-2024-particle-nuclear-illustration.jpg newsletter1
Reanimating the ‘living Earth’ concept for a more cynical world https://physicsworld.com/a/reanimating-the-living-earth-concept-for-a-more-cynical-world/ Wed, 30 Oct 2024 13:30:24 +0000 https://physicsworld.com/?p=117485 James Dacey reviews Becoming Earth: How Our Planet Came to Life by Ferris Jabr

The post Reanimating the ‘living Earth’ concept for a more cynical world appeared first on Physics World.

Tie-dye, geopolitical tension and a digitized Abba back on stage. Our appetite for revisiting the 1970s shows no signs of waning. Science writer Ferris Jabr has now reanimated another idea that captured the era’s zeitgeist: the concept of a “living Earth”. In Becoming Earth: How Our Planet Came to Life Jabr makes the case that our planet is far more than a lump of rock that passively hosts complex life. Instead, he argues that the Earth and life have co-evolved over geological time and that appreciating these synchronies can help us to steer away from environmental breakdown.

“We, and all living things, are more than inhabitants of Earth – we are Earth, an outgrowth of its structure and an engine of its evolution.” If that sounds like something you might hear in the early hours at a stone circle gathering, don’t worry. Jabr fleshes out his case with the latest science and journalistic flair in what is an impressive debut from the Oregon-based writer.

Becoming Earth is a reappraisal of the Gaia hypothesis, proposed in 1972 by British scientist James Lovelock and co-developed over several decades by US microbiologist Lynn Margulis. This idea of the Earth functioning as a self-regulating living organism has faced scepticism over the years, with many feeling it is untestable and strays into the realm of pseudoscience. In a 1988 essay, the biologist and science historian Stephen Jay Gould called Gaia “a metaphor, not a mechanism”.

Though undoubtedly a prodigious intellect, Lovelock was not your typical academic. He worked independently across fields including medical research, inventing the electron capture detector and consulting for petrochemical giant Shell. Add that to Gaia’s hippyish name – evoking the Greek goddess of Earth – and it’s easy to see why the theory faced a branding issue within mainstream science. Lovelock himself acknowledged errors in the theory’s original wording, which implied the biosphere acted with intention.

Though he makes due reference to the Gaia hypothesis, Jabr’s book is a standalone work, and in revisiting the concept in 2024, he has one significant advantage: we now have a tonne of scientific evidence for tight coupling between life and the environment. For instance, microbiologists increasingly speak of soil as a living organism because of the interconnections between micro-organisms and soil’s structure and function. Physicists meanwhile happily speak of “complex systems” where collective behaviour emerges from interactions of numerous components – climate being the obvious example.

To simplify this sprawling topic, Becoming Earth is structured into three parts: Rock, Water and Air. Accessible scientific discussions are interspersed with reportage, based on Jabr’s visits to various research sites. We kick off at the Sanford Underground Research Facility in South Dakota (also home to neutrino experiments) as Jabr descends 1500 m in search of iron-loving microbes. We learn that perhaps 90% of all microbes live deep underground and they transform Earth wherever they appear, carving vast caverns and regulating the global cycling of carbon and nutrients. Crucially, microbes also created the conditions for complex life by oxygenating the atmosphere.

In the Air section, Jabr scales the 1500 narrow steps of the Amazon Tall Tower Observatory to observe the forest making its own rain. Plants are constantly releasing water into the air through their leaves, and this drives more than half of the 20 billion tonnes of rain that fall on its canopy daily – more than the volume discharged by the Amazon river. “It’s not that Earth is a single living organism in exactly the same way as a bird or bacterium, or even a superorganism akin to an ant colony,” explains Jabr. “Rather that the planet is the largest known living system – the confluence of all other ecosystems – with structures, rhythms, and self-regulating processes that resemble those of its smaller constituent life forms. Life rhymes at every scale.”

When it comes to life’s capacity to alter its environment, not all creatures are born equal. Humans are having a supersized influence on these planetary rhythms despite having appeared only recently in geological history. Jabr suggests the Anthropocene – a proposed epoch defined by humanity’s influence on the planet – may have started between 50,000 and 10,000 years ago. At that time, our ancestors hunted mammoths and other megafauna into extinction, altering grassland habitats that had preserved a relatively cool climate.

Some of the most powerful passages in Becoming Earth concern our relationship with hydrocarbons. “Fossil fuel is essentially an ecosystem in an urn,” writes Jabr to illustrate why coal and oil store such vast amounts of energy. Elsewhere, on a beach in Hawaii an earth scientist and artist scoop up “plastiglomerates” – rocks formed from the eroded remains of plastic pollution fused with natural sediments. Humans have “forged a material that had never existed before”.

A criticism of the original Gaia hypothesis is that its association with a self-regulating planet may have fuelled a type of climate denialism. Science historian Leah Aronowsky argued that Gaia created the conditions for people to deny humans’ unique capacity to tip the system.

Jabr doesn’t see it that way and is deeply concerned that we are hastening the end of a stable period for life on Earth. But he also suggests we have the tools to mitigate the worst impacts, though this will likely require far more than just cutting emissions. He visits the Orca project in Iceland, the world’s first and largest plant for removing carbon from the atmosphere and storing it over long periods – in this case injecting it into basalt deep below the surface.

In an epilogue, we finally meet a 100-year-old James Lovelock at his Dorset home three years before his death in 2022. Still cheerful and articulate, Lovelock thrived on humour and tackling the big questions. As pointed out by Jabr, Lovelock was also prone to contradiction and the occasional alarmist statement. For instance, in his 2006 book The Revenge of Gaia he claimed that the few breeding humans left by the end of the century would be confined to the Arctic. Fingers crossed he’s wrong on that one!

Perhaps Lovelock was prone to the same phenomenon we see in quantum physics where even the sharpest scientific minds can end up shrouding the research in hype and woo. Once you strip away the new-ageyness, we may find that the idea of Gaia was never as “out there” as the cultural noise that surrounded it. Thanks to Jabr’s earnest approach, the living Earth concept is alive and kicking in 2024.

Opinion and reviews James Dacey reviews Becoming Earth: How Our Planet Came to Life by Ferris Jabr https://physicsworld.com/wp-content/uploads/2024/10/2024-10-Dacey-Amazon.jpg newsletter
Superconductivity theorist Leon Cooper dies aged 94 https://physicsworld.com/a/superconductivity-theorist-leon-cooper-dies-aged-94/ Tue, 29 Oct 2024 14:09:20 +0000 https://physicsworld.com/?p=117733 Cooper carried out research into superconductivity and neuroscience

The post Superconductivity theorist Leon Cooper dies aged 94 appeared first on Physics World.

The US condensed-matter physicist Leon Cooper, who shared the 1972 Nobel Prize for Physics, has died at the age of 94. In the late 1950s, Cooper, together with his colleagues Robert Schrieffer and John Bardeen, developed a theory of superconductivity that could explain why certain materials lose all electrical resistance at low temperatures.

Born on 28 February 1930 in New York City, US, Cooper graduated from the Bronx High School of Science in 1947 before earning a degree from Columbia University in 1951 and a PhD in 1954.

Cooper then spent time at the Institute for Advanced Study in Princeton, the University of Illinois and Ohio State University before heading to Brown University in 1958 where he remained for the rest of his career.

It was in Illinois that Cooper began to work on a theoretical explanation of superconductivity – a phenomenon that was first seen by the Dutch physicist Heike Kamerlingh Onnes when he discovered in 1911 that the electrical resistance of mercury suddenly disappeared beneath a temperature of 4.2 K.

However, there was no microscopic theory of superconductivity until 1957, when Bardeen, Cooper and Schrieffer – all based at Illinois – came up with their “BCS” theory. This described how an electron can deform the atomic lattice through which it moves, thereby pairing with a neighbouring electron – a bound pair that became known as a Cooper pair. Pairing allows all the electrons in a superconductor to move as a single cohort, known as a condensate, that prevails over the thermal fluctuations that would otherwise break the pairs.
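In the weak-coupling limit, BCS theory predicts that this pairing opens an energy gap Δ at the Fermi surface (the standard textbook result, stated here for context rather than taken from the original papers’ notation):

```latex
\Delta \simeq 2\hbar\omega_D \, e^{-1/(N(0)V)}, \qquad
k_B T_c \approx 1.13\,\hbar\omega_D \, e^{-1/(N(0)V)}
```

where ω_D is the Debye frequency, N(0) the density of electronic states at the Fermi level and V the attractive electron–phonon coupling. Together these give the universal BCS ratio 2Δ(0) ≈ 3.5 k_B T_c, a prediction borne out by conventional superconductors such as mercury and tin.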

Bardeen, Cooper and Schrieffer published their BCS theory in April 1957 (Phys. Rev. 106 162), which was then followed in December by a full-length paper (Phys. Rev. 108 1175). Cooper was in his late 20s when he made the breakthrough.

Not only did the BCS theory of superconductivity successfully account for the behaviour of “conventional” low-temperature superconductors such as mercury and tin but it also had application in particle physics by contributing to the notion of spontaneous symmetry breaking.

For their work the trio won the 1972 Nobel Prize for Physics “for their jointly developed theory of superconductivity, usually called the BCS-theory”.

From BCS to BCM

While Cooper continued to work in superconductivity, later in his career he turned to neuroscience. In 1973 he founded and directed Brown’s Institute for Brain and Neural Systems, which studied animal nervous systems and the human brain. In the 1980s he came up with a physical theory of learning in the visual cortex dubbed the “BCM” theory, named after Cooper and his colleagues Elie Bienenstock and Paul Munro.

He also founded the technology firm Nestor along with Charles Elbaum, which aimed to find commercial and military applications for artificial neural networks.

As well as the Nobel prize, Cooper was awarded the Comstock Prize from the US National Academy of Sciences in 1968 and the Descartes Medal from the Academie de Paris in 1977.

He also wrote numerous books including An Introduction to the Meaning and Structure of Physics in 1968 and Physics: Structure and Meaning in 1992. More recently, he published Science and Human Experience in 2014.

“Leon’s intellectual curiosity knew no boundaries,” notes Peter Bilderback, who worked with Cooper at Brown. “He was comfortable conversing on any subject, including art, which he loved greatly. He often compared the construction of physics to the building of a great cathedral, both beautiful human achievements accomplished by many hands over many years and perhaps never to be fully finished.”

News Cooper carried out research into superconductivity and neuroscience https://physicsworld.com/wp-content/uploads/2024/10/Cooper-BCS.jpg newsletter1
From buckyballs to biological membranes: ISIS celebrates 40 years of neutron science https://physicsworld.com/a/from-buckyballs-to-biological-membranes-isis-celebrates-40-years-of-neutron-science/ Tue, 29 Oct 2024 10:00:50 +0000 https://physicsworld.com/?p=117353 As ISIS – the UK’s muon and neutron source – turns 40, Rosie de Laune and colleagues from ISIS explore the past, present and future of neutron scattering

The post From buckyballs to biological membranes: ISIS celebrates 40 years of neutron science appeared first on Physics World.

When British physicist James Chadwick discovered the neutron in 1932, he supposedly said, “I am afraid neutrons will not be of any use to anyone.” The UK’s neutron user facility – the ISIS Neutron and Muon Source, now operated by the Science and Technology Facilities Council (STFC) – was opened 40 years ago. In that time, the facility has welcomed more than 60,000 scientists from around the world. ISIS supports a global community of neutron-scattering researchers, and the work that has been done there shows that Chadwick couldn’t have been more wrong.

By the time of Chadwick’s discovery, scientists knew that the atom was mostly empty space, and that it contained electrons and protons. However, there were some observations they couldn’t explain, such as the disparity between the mass and charge numbers of the helium nucleus.

The neutron was the missing piece of this puzzle. Chadwick’s work was fundamental to our understanding of the atom, but it also set the stage for a powerful new field of condensed-matter physics. Like other subatomic particles, neutrons have wave-like properties, and their wavelengths are comparable to the spacings between atoms. This means that when neutrons scatter off materials, they create characteristic interference patterns. In addition, because they are electrically neutral, neutrons can probe deeper into materials than X-rays or electrons.
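The scale matching described above follows directly from the de Broglie relation λ = h/mv. As a quick back-of-the-envelope check (an illustrative calculation, not from the article), a neutron thermalized at room temperature has a wavelength of roughly 1.8 Å – right in the range of interatomic spacings:

```python
import math

# CODATA-style constants, SI units
h = 6.626e-34    # Planck constant, J s
m_n = 1.675e-27  # neutron mass, kg
k_B = 1.381e-23  # Boltzmann constant, J/K

def thermal_neutron_wavelength(T):
    """de Broglie wavelength of a neutron moving at the most probable
    speed of a Maxwell-Boltzmann distribution at temperature T (kelvin)."""
    v = math.sqrt(2 * k_B * T / m_n)  # most probable speed, m/s
    return h / (m_n * v)              # lambda = h / p

lam = thermal_neutron_wavelength(293)  # room temperature
print(f"{lam * 1e10:.2f} angstrom")    # ~1.8 angstrom
```

Cooling or heating the neutrons shifts this wavelength, which is exactly what the moderators described later in the article exploit.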

Today, facilities like ISIS use neutron scattering to probe everything from spacecraft components and solar cells to the effects of cosmic-ray neutrons on electronics, helping to ensure the resilience of technology for driverless cars and aircraft.

The origins of neutron scattering

On 2 December 1942 a group of scientists at the University of Chicago in the US, led by Enrico Fermi, watched the world’s first self-sustaining nuclear chain reaction, an event that would reshape world history and usher in a new era of atomic science.

One of those in attendance was Ernest O Wollan, a physicist with a background in X-ray scattering. The neutron’s wave-like properties had been established in 1936 and Wollan recognized that he could use neutrons produced by a nuclear reactor like the one in Chicago to determine the positions of atoms in a crystal. Wollan later moved to Oak Ridge National Laboratory (ORNL) in Tennessee, where a second reactor was being built, and at the end of 1944 his team was able to observe Bragg diffraction of neutrons in sodium chloride and gypsum salts.
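Bragg diffraction of the kind Wollan observed obeys nλ = 2d sinθ. As an illustrative example (numbers chosen for the sketch, not taken from Wollan’s measurements), the angle at which 1.8 Å neutrons diffract from the d = 2.82 Å (200) planes of sodium chloride is easily computed:

```python
import math

def bragg_angle(wavelength, d_spacing, order=1):
    """Return the Bragg angle theta in degrees satisfying
    n * lambda = 2 * d * sin(theta). Lengths in any common unit."""
    s = order * wavelength / (2 * d_spacing)
    if s > 1:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# 1.8 angstrom thermal neutrons on the (200) planes of NaCl (d = 2.82 angstrom)
theta = bragg_angle(1.8, 2.82)
print(f"theta = {theta:.1f} degrees")  # ~18.6 degrees
```

Measuring the angles of such diffraction peaks and inverting this relation is how neutron diffraction recovers the positions of atoms in a crystal.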

A few years later Wollan was joined by Clifford Shull, with whom he refined the technique and constructed the world’s first purpose-built neutron-scattering instrument. Shull won the Nobel Prize for Physics in 1994 for his work (with Bertram Brockhouse, who had pioneered the use of neutron scattering to measure excitations), but Wollan was ineligible because he had died 10 years previously.

The early reactors used for neutron scattering were multipurpose; the first to be designed specifically to produce neutron beams was the High Flux Beam Reactor (HFBR) at Brookhaven National Laboratory in the US in 1965. This was closely followed in 1972 by the Institut Laue–Langevin (ILL) in France, a facility that is still running today.

The first target station at the ISIS Neutron and Muon Source

Rather than using a reactor, ISIS is based on an alternative technology called “spallation” that first emerged in the 1970s. In spallation, neutrons are produced by accelerating protons into a heavy-metal target. The protons collide like bullets with nuclei in the target, which absorb them and then discharge high-energy particles, including neutrons.

The first such sources specifically designed for neutron scattering were the KENS source at the Institute of Materials Structure Science (IMSS) in Japan, which started operation in 1980, and the Intense Pulsed Neutron Source at the Argonne National Laboratory in the US, which started operation in 1981.

The pioneering development work on these sources and in other institutions was of great benefit during the design and development of what was to become ISIS. The facility was approved in 1977 and the first beam was produced on 16 December 1984. In October 1985 the source was formally named ISIS and opened by then UK prime minister Margaret Thatcher. Today around 20 reactor and spallation neutron sources are operational around the world and one – the European Spallation Source (ESS) – is under construction in Sweden.

The name ISIS was inspired by both the river that flows through Oxford and the Egyptian goddess of reincarnation. The relevance of the latter relates to the fact that ISIS was built on the site of the NIMROD proton synchrotron that operated between 1964 and 1978, reusing much of its infrastructure and components.

Producing neutrons and muons

At the heart of ISIS is an 800 MeV accelerator that produces intense pulses of protons 50 times a second. These pulses are then fired at two tungsten targets. Spallation of the tungsten by the proton beam produces neutrons that fly off in all directions.

Before the neutrons can be used, they must be slowed down, which is achieved by passing them through a material called a “moderator”. ISIS uses various moderators which operate at different temperatures, producing neutrons with varying wavelengths. This enables scientists to probe materials on length scales from fractions of an angstrom to hundreds of nanometres.

Arrayed around the two neutron sources and the moderators are more than 25 beamlines that direct neutrons to one of ISIS’s specialized experiments. Many of these perform neutron diffraction, which is used to study the structure of crystalline and amorphous solids, as well as liquids.

When neutrons scatter, they also transfer a small amount of energy to the material and can excite vibrational modes in atoms and molecules. ISIS has seven beamlines dedicated to measuring this energy transfer, a technique called neutron spectroscopy. This can tell us about atomic and molecular bonds and is also used to study properties like specific heat and resistivity, as well as magnetic interactions.

Neutrons have spin, so they are also sensitive to the magnetic properties of materials. Neutron diffraction is used to investigate magnetic ordering such as ferrimagnetism, whereas spectroscopy is suited to the study of collective magnetic excitations.

Neutrons can sense short- and long-range magnetic ordering, but to understand localized effects with small magnetic moments, an alternative probe is needed. Since 1987, ISIS has also produced muon beams, which are used for this purpose, as well as other applications. In front of one of the neutron targets is a carbon foil; when the proton beam passes through it, pions are produced, which rapidly decay into muons. Rather than scattering, muons become implanted in the material, where they decay into positrons. By analysing the decay positrons, scientists can study very weak and fluctuating magnetic fields in materials that may be inaccessible with neutrons. For this reason, muon and neutron techniques are often used together.

“The ISIS instrument suite now provides capability across a broad range of neutron and muon science,” says Roger Eccleston, ISIS director. “We’re constantly engaging our user community, providing feedback and consulting them on plans to develop ISIS. This continues as we begin our ‘Endeavour’ programme: the construction of four new instruments and five significant upgrades to deliver even more performance enhancements.

“ISIS has been a part of my career since I arrived as a placement student shortly before the inauguration. Although I have worked elsewhere, ISIS has always been part of my working life. I have seen many important scientific and technical developments and innovations that kept me inspired to keep coming back.”

Over the last 40 years, the samples studied at ISIS have become smaller and more complex, and measurements have become quicker. The kinetics of chemical reactions can be imaged in real time, and extreme temperatures and pressures can be achieved. Early work from ISIS focused on physics and chemistry questions such as the properties of high-temperature superconductors, the structure of chemicals and the phase behaviour of water. More recent work includes “seeing” catalysis in real time, studying biological systems such as bacterial membranes, and enhancing the reliability of circuits for driverless cars.

Understanding the building blocks of life

Unlike X-rays and electrons, neutrons scatter strongly from light nuclei including hydrogen, which means they can be used to study water and organic materials.

Water is the most ubiquitous liquid on the planet, but its molecular structure gives it complex chemical and physical properties. Significant work on the phase behaviour of water was performed at ISIS in the early 2000s by scientists from the UK and Italy, who showed that liquid water under pressure transitions between two distinct structures, one low density and one high density (Phys. Rev. Lett. 84 2881).

A cartoon of the model outer membrane of the bacterium used in ISIS experiments

Water is the molecule of life, and as the technical capabilities of ISIS have advanced, it has become possible to study it inside cells, where it underpins vital functions from protein folding to chemical reactions. In 2023 a team from Portugal used the facilities at ISIS to investigate whether the water inside cells can be used as a biomarker for cancer.

Because it’s confined at the nanoscale, water in a cell will behave quite differently to bulk water. At these scales, water’s properties are highly sensitive to its environment, which changes when a cell becomes cancerous. The team showed that this can be measured with neutron spectroscopy, manifesting as an increased flexibility in the cancerous cells (Scientific Reports 13 21079).

If light is incident on an interface between two materials with different refractive indices it may, if the angle of incidence is shallow enough, be totally reflected. A similar effect is exhibited by neutrons directed at the surface of a material at grazing incidence, and neutron reflectometry instruments at ISIS use this to measure the thickness, surface roughness and chemical composition of thin films.
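For neutrons, total reflection occurs below a critical grazing angle, which for small angles is approximately θ<sub>c</sub> = λ√(SLD/π), where SLD is the material's scattering length density. A rough sketch using a textbook value for nickel, a common neutron-mirror material (the numbers are illustrative, not tied to any ISIS instrument):

```python
import math

def critical_angle_deg(wavelength_angstrom, sld_per_angstrom2):
    """Grazing angle below which neutrons are totally reflected:
    theta_c = wavelength * sqrt(SLD / pi), valid in the small-angle limit."""
    theta_rad = wavelength_angstrom * math.sqrt(sld_per_angstrom2 / math.pi)
    return math.degrees(theta_rad)

# Nickel: scattering length density ~ 9.4e-6 per square angstrom
print(f"{critical_angle_deg(5.0, 9.4e-6):.2f} deg")  # ~0.50 deg at 5 angstroms
```

The critical angle is well under a degree, which is why reflectometry experiments are performed at grazing incidence.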

One recent application of this technique at ISIS was a 2018 project in which a team from the UK studied the effect of a powerful “last resort” antibiotic on the outer membrane of a bacterium. This antibiotic is only effective at body temperature, and the researchers showed that this is because the thermal motion of molecules in the outer membrane makes it easier for the antibiotic to slip in and disrupt the bacterium’s structure (PNAS 115 E7587).

Exploring the quantum world

A year after ISIS became operational, physicists Georg Bednorz and Karl Alexander Müller, working at the IBM research laboratory in Switzerland, discovered superconductivity in a material at 35 K, 12 K higher than any other known superconductor at the time. This discovery would later win them the 1987 Nobel Prize for Physics.

High-temperature superconductivity was one of the most significant discoveries of the 1980s, and it was a focus of early work at ISIS. Another landmark came in 1987, when yttrium barium copper oxide (YBCO) was found to exhibit superconductivity above 77 K, meaning that instead of liquid helium, it can be cooled to a superconducting state with the much cheaper liquid nitrogen. The structure of this material was first fully characterized at ISIS by a team from the US and UK (Nature 327 310).

Illustration of several arrows floating on liquid

Another example of the quantum systems studied at ISIS is quantum spin liquids (QSLs). Most magnetic materials form an ordered phase like a ferromagnet when cooled, but a QSL is an interacting system of electron spins that is, in theory, disordered even when cooled to absolute zero.

QSLs are of great interest today because they are theorized to exhibit long-range entanglement, which could be applied to quantum computing and communications. QSLs have proven challenging to identify experimentally, but evidence from neutron scattering and muon spectroscopy at ISIS has characterized spin-liquid states in a number of materials (Nature 471 612).

Developing sustainable solutions and new materials

Over the years, experimental set-ups at ISIS have evolved to handle increasingly extreme and complex conditions. Almost 20 years ago, high-pressure neutron experiments performed by a UK team at ISIS showed that surfactants could be designed to enhance the solubility of liquid carbon dioxide, potentially unlocking a vast array of applications in the food and pharmaceutical industries as an environmentally friendly alternative to traditional petrochemical solvents (Langmuir 22 9832).

Today, further developments in sample environment, detector technology and data analysis software enable us to observe chemical processes in real time, with materials kept under conditions that closely mimic their actual use. Recently, neutron imaging was used by a team from the UK and Germany to monitor a catalyst used widely in the chemical industry to improve the efficiency of reactions (Chem. Commun. 59 12767). Few methods can observe what is happening during a reaction, but neutron imaging was able to visualize it in real time.

Another discovery made just after ISIS became operational was the molecule buckminsterfullerene or “buckyball”. Buckyballs are a molecular form of carbon consisting of 60 carbon atoms arranged in a spherical structure, resembling a football. The scientists who first synthesized this molecule were awarded the Nobel Prize for Chemistry in 1996, and in the years following this discovery, researchers have studied this form of carbon using a range of techniques, including neutron scattering.

Ensembles of buckyballs can form a crystalline solid, and in the early 1990s studies of crystalline buckminsterfullerene at ISIS revealed that, while adjacent molecules are oriented randomly at room temperature, they transition to an ordered structure below 249 K to minimize their energy (Nature 353 147).

buckyballs

Four decades on, fullerenes (the family of materials that includes buckyballs) continue to present many research opportunities. Through a process known as “molecular surgery”, synthetic chemists can create an opening in the fullerene cage, enabling them to insert an atom, ion or molecular cluster. Neutron-scattering studies at ISIS were recently used to characterize helium atoms trapped inside buckyballs (Phys. Chem. Chem. Phys. 25 20295). These endofullerenes are helping to improve our understanding of the quantum mechanics associated with confined particles and have potential applications ranging from photovoltaics to drug delivery.

Just as they shed light on materials of the future, neutrons and muons also offer a unique glimpse into the materials, methods and cultures of the past. At ISIS, the penetrative and non-destructive nature of neutrons and muons has been used to study many invaluable cultural heritage objects from ancient Egyptian lizard coffins (Sci. Rep. 13 4582) to Samurai helmets (Archaeol. Anthropol. Sci. 13 96), deepening our understanding of the past without damaging any of these precious artefacts.

Looking within, and to the future

If you want to understand how things structurally fail, you must get right inside and look, and the neutron’s ability to penetrate deep into materials allows engineers to do just that. ISIS’s Engin-X beamline measures the strain within a crystalline material by measuring the spacing between atomic lattice planes. This has been used by sectors including aerospace, oil and gas exploration, automotive, and renewable power.
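The arithmetic behind a strain measurement is straightforward: Bragg's law (λ = 2d sinθ) converts a measured diffraction peak into a lattice-plane spacing d, and the elastic strain is the fractional change relative to an unstrained reference spacing d₀. A minimal sketch with invented numbers (this is not Engin-X's actual analysis code):

```python
import math

def d_spacing(wavelength, two_theta_deg):
    """Lattice-plane spacing from Bragg's law, lambda = 2 * d * sin(theta)."""
    return wavelength / (2 * math.sin(math.radians(two_theta_deg) / 2))

def lattice_strain(d_measured, d_reference):
    """Elastic strain from the shift in lattice spacing: (d - d0) / d0."""
    return (d_measured - d_reference) / d_reference

d0 = d_spacing(3.000, 90.0)  # unstrained reference (wavelengths in angstroms)
d = d_spacing(3.003, 90.0)   # the same reflection measured under load
print(f"strain = {lattice_strain(d, d0):.1e}")  # 1.0e-03
```

A shift of a tenth of a percent in peak position translates directly into a strain of 10⁻³, which is why diffraction makes such a sensitive internal strain gauge.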

Recently, ISIS has also been attracting electronics companies looking to use the facility to irradiate their chips with neutrons. This can mimic the high-energy neutrons generated in the atmosphere by cosmic rays, which can cause reliability problems in electronics. So, when you next fly, drive or surf the web, ISIS may just have had a hand in it.

Series of circuit boards attached to steel rods connected with cables

With its many discoveries and developments, ISIS has succeeded in proving Chadwick wrong over the past 40 years, and the facility is now setting its sights on the upcoming decades of neutron-scattering research. “While predicting the future of scientific research is challenging, we can anchor our activities around a couple of trends,” explains ISIS associate director Sean Langridge. “Our community will continue to pursue fundamental research for its intrinsic societal value by discovering, synthesizing and processing new materials. Furthermore, we will use the capabilities of neutrons to engineer and optimize a material’s functionality, for example, to increase operational lifetime and minimize environmental impact.”

The capability requirements will continue to become more complex and, as they do so, the amount of data produced will also increase. The extensive datasets produced at ISIS are well suited for machine-learning techniques. These can identify new phenomena that conventional methods might overlook, leading to the discovery of novel materials.

As ISIS celebrates its 40th anniversary of neutron production, the use of neutrons continues to provide huge value to the physics community. A feasibility and design study for a next-generation neutron and muon source is now under way. Despite four decades of neutrons proving their worth, there is still much to discover over the coming decades of UK neutron and muon science.

The post From buckyballs to biological membranes: ISIS celebrates 40 years of neutron science appeared first on Physics World.

]]>
Feature As ISIS – the UK’s muon and neutron source – turns 40, Rosie de Laune and colleagues from ISIS explore the past, present and future of neutron scattering https://physicsworld.com/wp-content/uploads/2024/10/2024-10-DeLaune-neutron-scattering-abstract-2499733271-Shutterstock_Tom-Korcak.jpg newsletter1
Optical technique measures intramolecular distances with angstrom precision https://physicsworld.com/a/optical-technique-measures-intramolecular-distances-with-angstrom-precision/ Mon, 28 Oct 2024 14:04:08 +0000 https://physicsworld.com/?p=117694 Modified MINFLUX approach could be used to study biological processes inside cells

The post Optical technique measures intramolecular distances with angstrom precision appeared first on Physics World.

]]>
Physicists in Germany have used visible light to measure intramolecular distances smaller than 10 nm thanks to an advanced version of an optical fluorescence microscopy technique called MINFLUX. The technique, which has a precision of just 1 angstrom (0.1 nm), could be used to study biological processes such as interactions between proteins and other biomolecules inside cells.

In conventional microscopy, when two features of an object are separated by less than half the wavelength of the light used to image them, they will appear blurry and indistinguishable due to diffraction. Super-resolution microscopy techniques can, however, overcome this so-called Rayleigh limit by exciting individual fluorescent groups (fluorophores) on molecules while leaving neighbouring fluorophores alone, meaning they remain dark.
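The Rayleigh criterion puts a number on this limit: the smallest resolvable separation is roughly λ/(2 NA), where NA is the objective's numerical aperture. A quick sketch with typical values (illustrative only):

```python
def rayleigh_limit_nm(wavelength_nm, numerical_aperture):
    """Approximate diffraction-limited resolution, d ~ wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light through a high-NA oil-immersion objective
print(f"{rayleigh_limit_nm(550, 1.4):.0f} nm")  # ~196 nm -- three orders of
                                                # magnitude above the 1 angstrom scale
```

Even with the best conventional optics, the limit sits near 200 nm, which makes the angstrom-scale precision described below so striking.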

One such technique, known as nanoscopy with minimal photon fluxes, or MINFLUX, was invented by the physicist Stefan Hell. First reported in 2016 by Hell’s team at the Max Planck Institute (MPI) for Multidisciplinary Sciences in Göttingen, MINFLUX first “switches on” individual molecules, then determines their position by scanning a beam of light with a doughnut-shaped intensity profile across them.

The problem is that at distances of less than 5 to 10 nm, most fluorescent molecules start interacting with each other. This means they cannot emit fluorescence independently – a prerequisite for reliable distance measurements, explains Steffen Sahl, who works with Hell at the MPI.

Non-interacting fluorescent dye molecules

To overcome this problem, the team turned to a new type of fluorescent dye molecule developed in Hell’s research group. These molecules can be switched on in succession using UV light, but they do not interact with each other. This allows the researchers to mark the positions they want to measure with single fluorescent molecules and record their locations independently, to within as little as 0.1 nm, even when the dye molecules are close together.

“The localization process boils down to relating the unknown position of the fluorophore to the known position of the centre of the doughnut beam, where there is minimal or ideally zero excitation light intensity,” explains Hell. “The distance between the two can be inferred from the excitation (and hence the fluorescence) rate of the fluorophore.”

The advantage of MINFLUX, Hell tells Physics World, is that the closer the beam’s intensity minimum gets to the fluorescent molecule, the fewer fluorescence photons are needed to pinpoint the molecule’s location. This takes the burden of producing localizing photons – in effect, tiny lighthouses signalling “Here I am!” – away from the relatively weakly emitting molecule and shifts it onto the laser beam, which has photons to spare. The overall effect is to reduce the required number of detected photons “typically by a factor of 100”, Hell says, adding that this translates into a 10-fold increase in localization precision compared to traditional camera-based techniques.

“A real alternative” to existing measurement methods

The researchers demonstrated their technique by precisely determining distances of 1–10 nanometres in polypeptides and proteins. To prove that they were indeed measuring distances smaller than the size of these molecules, they used molecules of a different substance, polyproline, as “rulers” of various lengths.

Polyproline is relatively stiff and was used for a similar purpose in early demonstrations of a method called Förster resonance energy transfer (FRET) that is now widely used in biophysics and molecular biology. However, FRET suffers from fundamental limitations on its accuracy, and Sahl thinks the “arguably surprising” 0.1 nm precision of MINFLUX makes it “a real alternative” for monitoring sub-10-nm distances.

While it had long been clear that MINFLUX should, in principle, be able to resolve distances at the < 5 nm scale and measure them to sub-nm precision, Hell notes that it had not been demonstrated at this scale until now. “Showing that the technique can do this is a milestone in its development and demonstration,” he says. “It is exciting to see that we can resolve fluorescent molecules that are so close together that they literally touch.” Being able to measure these distances with angstrom precision is, Hell adds, “astounding if you bear in mind that all this is done with freely propagating visible light focused by a conventional lens”.

“I find it particularly fascinating that we have now gone to the very size scale of biological molecules and can quantify distances even within them, gaining access to details of their conformation,” Sahl adds.

The researchers say that one of the key prerequisites for this work (and indeed all super-resolution microscopy developed to date) was the sequential ON/OFF switching of the fluorophores emitting fluorescence. Because any cross-talk between the two molecules would have been problematic, one of the main challenges was to identify fluorescent molecules with truly independent behaviour – that is, ones in which the silent (OFF-state) molecule did not affect its emitting (ON-state) neighbour and vice versa.

Looking forward, Hell says he and his colleagues are now looking to develop and establish MINFLUX as a standard tool for unravelling and quantifying the mechanics of proteins.

The research is published in Science.

The post Optical technique measures intramolecular distances with angstrom precision appeared first on Physics World.

]]>
Research update Modified MINFLUX approach could be used to study biological processes inside cells https://physicsworld.com/wp-content/uploads/2024/10/adj7368_Science_Sahletal_600dpi_Press_Image-01.jpg newsletter1
Daily adaptive proton therapy employed in the clinic for the first time https://physicsworld.com/a/daily-adaptive-proton-therapy-employed-in-the-clinic-for-the-first-time/ Mon, 28 Oct 2024 08:30:47 +0000 https://physicsworld.com/?p=117676 Researchers in Switzerland have integrated online daily adaptation into the clinical proton therapy workflow

The post Daily adaptive proton therapy employed in the clinic for the first time appeared first on Physics World.

]]>
Adaptive radiotherapy – in which a patient’s treatment is regularly replanned throughout their course of therapy – can compensate for uncertainties and anatomical changes and improve the accuracy of radiation delivery. Now, a team at the Paul Scherrer Institute’s Center for Proton Therapy has performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow.

Proton therapy benefits from a well-defined Bragg peak range that enables highly targeted dose delivery to a tumour while minimizing dose to nearby healthy tissues. This precision, however, also makes proton delivery extremely sensitive to anatomical changes along the beam path – arising from variations in mucus, air, muscle or fat in the body – or changes in the tumour’s position and shape.

“For cancer patients who are irradiated with protons, even small changes can have significant effects on the optimal radiation dose,” says first author Francesca Albertini in a press statement.

Online plan adaptation, where the patient remains on the couch during the replanning process, could help address the uncertainties arising from anatomical changes. But while this technique is being introduced into photon-based radiotherapy, daily online adaptation has not yet been applied to proton treatments, where it could prove even more valuable.

To address this shortfall, Albertini and colleagues developed a three-phase DAPT workflow, describing the procedure in Physics in Medicine & Biology. In the pre-treatment phase, two independent plans are created from the patient’s planning CT: a “template plan” that acts as a reference for the online optimized plan, and a “fallback plan” that can be selected on any day as a back-up if necessary.

Next, the online phase involves acquiring a daily CT before each irradiation, while the patient is on the treatment couch. For this, the researchers use an in-room CT-on-rails with a low-dose protocol. They then perform a fully automated re-optimization of the treatment plan based on the daily CT image. If the adapted plan meets the required clinical goals and passes an automated quality assurance (QA) procedure, it is used to treat the patient. If not, the fallback plan is delivered instead.

Finally, in the offline phase, the delivered dose in each fraction is recalculated retrospectively from the log files using a Monte Carlo algorithm. This step enables the team to accurately assess the dose delivered to the patient each day.
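The online-phase decision described above reduces to a simple selection rule, sketched here in Python (the function names are hypothetical, not taken from the team's software):

```python
def select_daily_plan(adapted_plan, fallback_plan, meets_clinical_goals, passes_qa):
    """Use the online-adapted plan only if it meets the clinical goals AND
    passes the automated QA procedure; otherwise deliver the fallback plan."""
    if meets_clinical_goals(adapted_plan) and passes_qa(adapted_plan):
        return adapted_plan
    return fallback_plan

# Example: a plan that meets the clinical goals but fails QA is not delivered
chosen = select_daily_plan("adapted", "fallback",
                           meets_clinical_goals=lambda plan: True,
                           passes_qa=lambda plan: False)
print(chosen)  # fallback
```

The pre-computed fallback plan is what makes this safe to run online: whatever happens during re-optimization, a clinically approved plan is always available for that day's fraction.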

First clinical implementation

The researchers employed their DAPT protocol in five adults with tumours in rigid body regions, such as the brain or skull base. As this study was designed to demonstrate proof-of-principle and ensure clinical safety, they specified some additional constraints: only the last few consecutive fractions of each patient’s treatment course were delivered using DAPT; the plans used standard field arrangements and safety margins; and the template and fallback plans were kept the same.

“It’s important to note that these criteria are not optimized to fully exploit the potential clinical benefits of our approach,” the researchers write. “As our implementation progresses and matures, we anticipate refining these criteria to maximize the clinical advantages offered by DAPT.”

Across the five patients, the team performed DAPT for 26 treatment fractions. In 22 of these, the online adapted plans were chosen for delivery. In three fractions, the fallback plan was chosen due to a marginal dose increase to a critical structure, while for one fraction, the fallback plan was used due to a miscommunication. The team emphasize that all of the adapted plans passed the online QA steps and all agreed well with the log file-based dose calculations.

The daily adapted plans provided target coverage to within 1.1% of the planned dose and, in 92% of fractions, exhibited improved dose metrics to the targets and/or organs-at-risk (OARs). The researchers observed that a non-DAPT delivery (using the fallback plan) could have significantly increased the maximum dose to both the target and OARs. For one patient, this would have increased the dose to their brainstem by up to 10%. In contrast, the DAPT approach ensured that the OAR doses remained within the 5% threshold for all fractions.

Albertini emphasizes, however, that the main aim of this feasibility study was not to demonstrate superior plan quality with DAPT, but rather to establish that it could be implemented safely and efficiently. “The observed decrease in maximum dose to some OARs was a bonus and reinforces the potential benefits of adaptive strategies,” she tells Physics World.

Importantly, the DAPT process took just a few minutes longer than a non-adaptive session, averaging just above 23 min per fraction (including plan adaptation and assessment of clinical goals). Keeping the adaptive treatment within the typical 30-min time slot allocated for a proton therapy fraction is essential to maintain the patient workflow.

To reduce the time requirement, the team automated key workflow components, including the independent dose calculations. “Once registration between the daily and reference images is completed, all subsequent steps are automatically processed in the background, while the users are evaluating the daily structure and plan,” Albertini explains. “Once the plan is approved, all the QA has already been performed and the plan is ready to be delivered.”

Following on from this first-in-patient demonstration, the researchers now plan to use DAPT to deliver full treatments (all fractions), as well as to enable margin reduction and potentially employ more conformal beam angles. “We are currently focused on transitioning our workflow to a commercial treatment planning system and enhancing it to incorporate deformable anatomy considerations,” says Albertini.

The post Daily adaptive proton therapy employed in the clinic for the first time appeared first on Physics World.

]]>
Research update Researchers in Switzerland have integrated online daily adaptation into the clinical proton therapy workflow https://physicsworld.com/wp-content/uploads/2024/10/28-10-24-Francesca-Albertini.jpg newsletter1
Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear https://physicsworld.com/a/imaging-method-could-detect-parkinsons-disease-up-to-20-years-before-symptoms-appear/ Fri, 25 Oct 2024 12:45:18 +0000 https://physicsworld.com/?p=117671 A technique that combines super-resolution microscopy with advanced computational analysis could identify early signs of Parkinson’s disease

The post Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear appeared first on Physics World.

]]>
Researchers at Tel Aviv University in Israel have developed a method to detect early signs of Parkinson’s disease at the cellular level using skin biopsies. They say that this capability could enable treatment up to 20 years before the appearance of motor symptoms characteristic of advanced Parkinson’s. Such early treatment could reduce neurotoxic protein aggregates in the brain and help prevent the irreversible loss of dopamine-producing neurons.

Parkinson’s disease is the second most common neurodegenerative disease in the world. The World Health Organization reports that its prevalence has doubled in the past 25 years, with more than 8.5 million people affected in 2019. Diagnosis is currently based on the onset of clinical motor symptoms. By the time of diagnosis, however, up to 80% of dopaminergic neurons in the brain may already be dead.

The new method combines a super-resolution microscopy technique, known as direct stochastic optical reconstruction microscopy (dSTORM), with advanced computational analysis to identify and map the aggregation of alpha-synuclein (αSyn), a synaptic protein that regulates transmission in nerve terminals. When it aggregates in brain neurons, αSyn causes neurotoxicity and impacts the central nervous system. In Parkinson’s disease, αSyn begins to aggregate about 15 years before motor symptoms appear.

Importantly, αSyn aggregates also accumulate in the skin. With this in mind, principal investigator Uri Ashery and colleagues developed a method for quantitative assessment of Parkinson’s pathology using skin biopsies from the upper back. The technique, which enables detailed characterization of nano-sized αSyn aggregates, will hopefully facilitate the development of a new molecular biomarker for Parkinson’s disease.

“We hypothesized that these αSyn aggregates are essential for understanding αSyn pathology in Parkinson’s disease,” the researchers write. “We created a novel platform that revealed a unique fingerprint of αSyn aggregates. The analysis detected a larger number of clusters, clusters with larger radii, and sparser clusters containing a smaller number of localizations in Parkinson’s disease patients relative to what was seen with healthy control subjects.”

The researchers used dSTORM to analyse skin biopsies from seven patients with Parkinson’s disease and seven healthy controls, characterizing nanoscale αSyn based on quantitative parameters such as aggregate size, shape, distribution, density and composition.

Super-resolution imaging

Their analysis revealed a significant decrease in the ratio of neuronal marker molecules to phosphorylated αSyn molecules (the pathological form of αSyn) in biopsies from Parkinson’s disease patients, suggesting the existence of damaged nerve cells in fibres enriched with phosphorylated αSyn.

The researchers determined that phosphorylated αSyn is organized into dense aggregates of approximately 75 nm in size. They also found that patients with Parkinson’s disease had a higher number of αSyn aggregates than the healthy controls, with larger αSyn clusters (75 nm compared with 69 nm).

“Parkinson’s disease diagnosis based on quantitative parameters represents an unmet need that offers a route to revolutionize the way Parkinson’s disease and potentially other neurodegenerative diseases are diagnosed and treated,” Ashery and colleagues conclude.

In the next phase of this work, supported by the Michael J. Fox Foundation for Parkinson’s Research, the researchers will increase the number of subjects to 90 to identify differences between patients with Parkinson’s disease and healthy subjects.

“We intend to pinpoint the exact juncture at which a normal quantity of proteins turns into a pathological aggregate,” says lead author Ofir Sade in a press statement. “In addition, we will collaborate with computer science researchers to develop a machine learning algorithm that will identify correlations between results of motor and cognitive tests and our findings under the microscope. Using this algorithm, we will be able to predict future development and severity of various pathologies.”

“The machine learning algorithm is intended to spot young individuals at risk for Parkinson’s,” Ashery adds. “Our main target population are relatives of Parkinson’s patients who carry mutations that increase the risk for the disease.”

The researchers report their findings in Frontiers in Molecular Neuroscience.

The post Imaging method could detect Parkinson’s disease up to 20 years before symptoms appear appeared first on Physics World.

]]>
Research update A technique that combines super-resolution microscopy with advanced computational analysis could identify early signs of Parkinson’s disease https://physicsworld.com/wp-content/uploads/2024/10/25-10-24-Uri-Ashery-and-Ofir-Sade.jpg
Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ https://physicsworld.com/a/ask-me-anything-raghavendra-srinivas-experimental-physics-is-never-boring/ Fri, 25 Oct 2024 07:59:11 +0000 https://physicsworld.com/?p=117480 Quantum scientist Raghavendra Srinivas thinks young researchers shouldn’t be afraid to ask questions

The post Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ appeared first on Physics World.

]]>
What skills do you use every day in your job?

One of my favourite parts of being an atomic physicist is the variety. I get to work with lasers, vacuums, experimental control software, simulations, data analysis and physics theory.

As I’m transitioning to a more senior position, the skills I use have changed. Rather than doing most of the lab-based work myself, I now have a more supervisory role on some projects. I go to the lab when I can but it’s certainly different. I’m also teaching a second-year quantum mechanics course, which requires its own skillset. I try to use my experience to impart more of an experimental flavour. The field is now in an exciting place where we can not only think about experiments with single quantum systems, but actually do them.

It’s important to have the right structures in place to deliver complex projects with many moving parts

I also work part-time at a trapped-ion quantum computing company, Oxford Ionics, which has grown from about 20 to over 60 people since I started in 2021. Being involved in a team with so many people has taught me a lot about the importance of project management. It’s important to have the right structures in place to deliver complex projects with many moving parts. In addition, most of my company colleagues are also not physicists; it’s important to be able to communicate with people across a range of disciplines.

What do you like best and least about your job?

Experimental physics is never boring, as experiments always find new and wonderful ways to break: 90–99% of the time something needs fixing, but when it works it’s just magical.

I’ve been incredibly lucky to work with a fantastic group of people wherever I’ve been. Experimental physics cannot be done alone and I feel very privileged to work with colleagues who are passionate about what they do and have a wide variety of skills.

I also love the opportunities for outreach activities that my position affords me. Since I started at Oxford, I have led work placements as part of In2scienceUK and more recently helped start a week-long summer school for school students with the National Quantum Computing Centre. In many ways, I think promoting the idea that a career in quantum physics is accessible to anyone as long as they are willing to work hard is the most impactful work I can do.

I do dislike that as you spend longer in a field, more and more non-lab-based tasks creep into your calendar. I also find it difficult to switch between different tasks but that’s the price to pay for being involved in multiple projects.

What do you know today that you wish you knew when you were starting out in your career?

It’s a difficult feeling for me to shake off even now, but when I started my career, I used to feel afraid to ask questions when I didn’t know something. I think it’s easy to fall into the trap of thinking it’s your fault, or that others will think less of you. However, I believe it’s better to see these instances as opportunities to learn rather than being embarrassed.

Scientifically, I think it’s also really important to be able to take a step back from the weeds of technical work and keep in mind the big-picture physics problem you’re trying to solve. I would have encouraged my past self to spend more time thinking deeply about physics, even beyond the field I was in. Just a couple of hours a week adds up over time without really taking away from other work.

It’s easy to pour yourself completely into a project, but it’s important to do this sustainably and avoid burnout

One last thing I’d tell my past self is to think about boundaries and find a healthy work-life balance. It’s easy to pour yourself completely into a project, but it’s important to do this sustainably and avoid burnout. Other aspects of life are important too.

The post Ask me anything: Raghavendra Srinivas – ‘Experimental physics is never boring’ appeared first on Physics World.

]]>
Interview Quantum scientist Raghavendra Srinivas thinks young researchers shouldn’t be afraid to ask questions https://physicsworld.com/wp-content/uploads/2024/10/2024-10-AMA-Srinivas-LISTING.jpg newsletter
Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence https://physicsworld.com/a/julia-sutcliffe-chief-scientific-advisor-explains-why-policymaking-must-be-underpinned-by-evidence/ Thu, 24 Oct 2024 12:23:33 +0000 https://physicsworld.com/?p=117656 Exploring a career in physics, systems engineering and advising the UK's Department for Business and Trade

The post Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features the physicist and engineer Julia Sutcliffe, who is chief scientific adviser to the UK government’s Department for Business and Trade.

In a wide-ranging conversation with Physics World’s Matin Durrani, Sutcliffe explains how she began her career as a PhD physicist before moving into systems engineering at British Aerospace, where she worked on cutting-edge technologies including robotics, artificial intelligence and autonomous systems. They also chat about Sutcliffe’s current role advising the UK government to ensure that policymaking is underpinned by the best evidence.

The post Julia Sutcliffe: chief scientific adviser explains why policymaking must be underpinned by evidence appeared first on Physics World.

]]>
Podcasts Exploring a career in physics, systems engineering and advising the UK's Department for Business and Trade https://physicsworld.com/wp-content/uploads/2024/10/Julia-Sutcliffe-1280-player.jpg newsletter
Eco-friendly graphene composite recovers gold from e-waste https://physicsworld.com/a/eco-friendly-graphene-composite-recovers-gold-from-e-waste/ Thu, 24 Oct 2024 09:55:03 +0000 https://physicsworld.com/?p=117637 New graphene-biopolymer material extracts gold ions 10 times more efficiently than other adsorbents

The post Eco-friendly graphene composite recovers gold from e-waste appeared first on Physics World.

]]>
A new type of composite material is 10 times more efficient at extracting gold from electronic waste than previous adsorbents. Developed by researchers in Singapore, the UK and China, the environmentally friendly composite is made from graphene oxide and a natural biopolymer called chitosan, and it filters out the gold without an external power source, making it an attractive alternative to older, more energy-intensive techniques.

Getting better at extracting gold from electronic waste, or e-waste, is desirable for two reasons. As well as reducing the volume of e-waste, it would lessen our reliance on mining and refining new gold, which involves environmentally hazardous materials such as activated carbon and cyanides. Electronic waste management is a relatively new field, however, and existing techniques like electrolysis are time-consuming and require a lot of energy.

A more efficient and suitable recovery process

Led by Kostya Novoselov and Daria Andreeva of the Institute for Functional Intelligent Materials at the National University of Singapore, the researchers chose graphene and chitosan because both have desirable characteristics for gold extraction. Graphene boasts a high surface area, making it ideal for adsorbing ions, they explain, while chitosan acts as a natural reducing agent, catalytically converting ionic gold into its solid metallic form.

While neither material is efficient enough to compete with conventional methods such as activated carbon on its own, Andreeva says they work well together. “By combining both of them, we enhance both the adsorption capacity of graphene and the catalytic reduction ability of chitosan,” she explains. “The result is a more efficient and suitable gold recovery process.”

High extraction efficiency

The researchers made the composite by getting one-dimensional chitosan macromolecules to self-assemble on two-dimensional flakes of graphene oxide. This assembly process triggers the formation of sites that bind gold ions. The enhanced extracting ability of the composite comes from the fact that the ion binding is cooperative, meaning that an ion binding at one site allows other ions to bind, too. The team had previously used similar methods in studies that focused on structures such as novel membranes with artificial ionic channels, anticorrosion coatings, sensors and actuators, switchable water valves and bioelectrochemical systems.

Once the gold ions are adsorbed onto the graphene surface, the chitosan catalyses the reduction of these ions, converting them from their ionic state into solid metallic gold, Andreeva explains. “This combined action of adsorption and reduction makes the process both highly efficient and environmentally friendly, as it avoids the use of harsh chemicals typically employed in gold recovery from electronic waste,” she says.

The researchers tested the material on a real waste mixture provided by SG Recycle Group SG3R Pte Ltd. Using this mixture, which contained gold at a residual concentration of just 3 ppm, they showed that the composite can extract nearly 17 g/g of Au3+ ions and just over 6 g/g of Au+ from a solution – values that are 10 times larger than those of existing gold adsorbents. The material also has an extraction efficiency above 99.5 percent by weight (wt%), breaking the current limit of 75 wt%. To top it off, the ion extraction process is ultrafast, taking only around 10 minutes compared with days for other graphene-based adsorbents.
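To get a feel for what those figures mean in practice, here is a back-of-envelope calculation (illustrative arithmetic only, not from the paper; the variable names are our own): at the quoted 3 ppm concentration, saturating one gram of composite at its reported 17 g/g capacity corresponds to processing several tonnes of solution.

```python
# Back-of-envelope arithmetic (not from the PNAS paper): the mass of 3 ppm
# solution that contains enough gold to saturate 1 g of composite at its
# reported 17 g/g Au3+ uptake capacity.
capacity_g_per_g = 17.0   # reported gold uptake per gram of composite
concentration = 3e-6      # 3 ppm by mass, as quoted above
solution_mass_g = capacity_g_per_g / concentration
print(f"{solution_mass_g / 1e6:.1f} tonnes of solution per gram of composite")
```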

No applied voltage required

The researchers, who report their work in PNAS, say that the multidimensional architecture of the composite’s structure means that no applied voltage is required to adsorb and reduce gold ions. Instead, the technique relies solely on the chemisorption kinetics of gold ions on the heterogeneous graphene oxide/chitosan nanoconfinement channels and the chemical reduction at multiple binding sites. The new process therefore offers a cleaner, more efficient and environmentally friendly method for recovering gold from electronic waste, they add.

While the present work focused on gold, the team say the technique could be adapted to recover other valuable metals such as silver, platinum or palladium from electronic waste or even mining residues. And that is not all: as well as e-waste, the technology might be applied to a wider range of environmental cleaning efforts, such as filtering out heavy metals from polluted water sources or industrial effluents. “It thus provides a solution for reducing metal contamination in ecosystems,” Andreeva says.

Other possible application areas, she adds, include sustainable decarbonization and hydrogen production, low-dimensional building blocks for embedding artificial neural networks in hardware for neuromorphic computing, and biomedical applications.

The Singapore researchers are now studying how to regenerate and reuse the composite material itself, to further reduce waste and improve the process’s sustainability. “Our ongoing research is focusing on optimizing the material’s properties, bringing us closer to a scalable, eco-friendly solution for e-waste management and beyond,” Andreeva says.

The post Eco-friendly graphene composite recovers gold from e-waste appeared first on Physics World.

]]>
Research update New graphene-biopolymer material extracts gold ions 10 times more efficiently than other adsorbents https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_24-14449-1.jpg
Cosmic antimatter could be created by annihilating WIMPs https://physicsworld.com/a/cosmic-antimatter-could-be-created-by-annihilating-wimps/ Wed, 23 Oct 2024 17:34:28 +0000 https://physicsworld.com/?p=117646 Detection of antideuterons and antihelium could help hone dark-matter models

The post Cosmic antimatter could be created by annihilating WIMPs appeared first on Physics World.

]]>
Weakly interacting massive particles (WIMPs) are prime candidates for dark matter – but the hypothetical particles have never been observed directly. Now, an international group of physicists has proposed a connection between WIMPs and the higher-than-expected flux of antimatter cosmic rays detected by NASA’s Alpha Magnetic Spectrometer (AMS-02) on the International Space Station.

Cosmic rays are high-energy charged particles that are created by a wide range of astrophysical processes including supernovae and the violent regions surrounding supermassive black holes. The origins of cosmic rays are not fully understood so they offer physicists opportunities to look for phenomena not described by the Standard Model of particle physics. This includes dark matter, a hypothetical substance that could account for about 85% of the mass in the universe.

If WIMPs exist, physicists believe that they would occasionally annihilate when they encounter one another to create matter and antimatter particles. Because WIMPs are very heavy, it is possible that these annihilations create antinuclei – the antimatter counterparts of nuclei, comprising antiprotons and antineutrons. Some of these antinuclei could make their way to Earth and be detected as cosmic rays.

Now, a trio of researchers in Spain, Sweden, and the US has done new calculations that suggest that unexpected antinuclei detections made by AMS-02 could shed light on the nature of dark matter. The trio is led by Pedro De La Torre Luque at the Autonomous University of Madrid.

Heavy antiparticles

According to the Standard Model of particle physics, antinuclei should be an extremely small component of the cosmic rays measured by AMS-02. However, excesses of antideuterons (antihydrogen-2), antihelium-3 and antihelium-4 have been glimpsed in data gathered by AMS-02.

In previous work, De La Torre Luque and colleagues explored the possibility that these antinuclei emerged through the annihilation of WIMPs. Using AMS-02 data, the team put new constraints on the hypothetical properties of WIMPs.

Now, the trio has built on this work. “With this information, we calculated the fluxes of antideuterons and antihelium that AMS-02 could detect: both from dark matter, and from cosmic ray interactions with gas in the interstellar medium,” De La Torre Luque says. “In addition, we estimated the maximum possible flux of antinuclei from WIMP dark matter.”

This allowed the researchers to test whether AMS-02’s cosmic ray measurements are really compatible with standard WIMP models. According to De La Torre Luque, their analysis had mixed implications for WIMPs.

“We found that while the antideuteron events measured by AMS-02 are well compatible with WIMP dark matter annihilating in the galaxy, only in optimistic cases can WIMPs explain the detected events of antihelium-3,” he explains. “No standard WIMP scenario can explain the detection of antihelium-4.”

Altogether, the team’s results are promising for proponents of the idea that WIMPs are a component of dark matter. However, the research also suggests that the WIMP model in its current form is incomplete. To be consistent with the AMS-02 data, the researchers believe that a new WIMP model must further push the bounds of the Standard Model.

“If these measurements are robust, we may be opening the window for something very exotic going on in the galaxy that could be related to dark matter,” says De La Torre Luque. “But it could also reveal some unexpected new phenomenon in the universe.” Ultimately, the researchers hope that the precision of their antinuclei measurements could bring us a small step closer to solving one of the deepest, most enduring mysteries in physics.

The research is described in the Journal of Cosmology and Astroparticle Physics.

The post Cosmic antimatter could be created by annihilating WIMPs appeared first on Physics World.

]]>
Research update Detection of antideuterons and antihelium could help hone dark-matter models https://physicsworld.com/wp-content/uploads/2024/10/23-10-2024-ISS-50_EVA-1_b_Alpha_Magnetic_Spectrometer.jpg newsletter
First look at prototype telescope for the LISA gravitational-wave mission https://physicsworld.com/a/first-look-at-prototype-telescope-for-the-lisa-gravitational-wave-mission/ Wed, 23 Oct 2024 10:30:09 +0000 https://physicsworld.com/?p=117632 The telescopes will be used to send and receive infrared laser beams between the three satellites in space

The post First look at prototype telescope for the LISA gravitational-wave mission appeared first on Physics World.

]]>
NASA has released the first images of a full-scale prototype for the six telescopes that will be included in the €1.5bn Laser Interferometer Space Antenna (LISA) mission.

Expected to launch in 2035 and operate for at least four years, LISA is a space-based gravitational-wave mission led by the European Space Agency.

It will comprise three identical satellites that will be placed in an equilateral triangle in space, with each side of the triangle measuring 2.5 million kilometres – more than six times the distance between the Earth and the Moon.

The three craft will send infrared laser beams to each other via twin telescopes in the satellites. The beams will be sent to free-floating golden cubes – each slightly smaller than a Rubik’s cube – that are placed inside the craft.

The system will be able to measure the separation between the cubes down to picometres, or trillionths of a metre. Such subtle changes in the distances measured by the laser beams will indicate the presence of a passing gravitational wave.
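The combination of million-kilometre arms and picometre resolution implies an extraordinarily small dimensionless strain. A quick order-of-magnitude check (illustrative only; the picometre figure is taken from the text above as a representative resolution, not a quoted sensitivity):

```python
# Order-of-magnitude sketch: the dimensionless strain h = dL/L corresponding
# to a one-picometre path change over a 2.5-million-km LISA arm.
arm_length_m = 2.5e9   # 2.5 million km, as stated above
delta_L_m = 1e-12      # one picometre, an illustrative resolution
strain = delta_L_m / arm_length_m
print(f"h ~ {strain:.0e}")  # ~4e-22
```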

The prototype telescope, dubbed the Engineering Development Unit Telescope, was manufactured and assembled by L3Harris Technologies in Rochester, New York.

It is made entirely from an amber-coloured glass-ceramic called Zerodur, which has been manufactured by Schott in Mainz, Germany. The primary mirror of each telescope is coated in gold to better reflect the infrared lasers and reduce heat loss.

On 25 January, ESA’s Science Programme Committee formally approved the start of construction of LISA.

The post First look at prototype telescope for the LISA gravitational-wave mission appeared first on Physics World.

]]>
Blog The telescopes will be used to send and receive infrared laser beams between the three satellites in space https://physicsworld.com/wp-content/uploads/2024/10/GSFC_LISA_small.jpg newsletter
Orbital angular momentum monopoles appear in a chiral crystal https://physicsworld.com/a/orbital-angular-momentum-monopoles-appear-in-a-chiral-crystal/ Wed, 23 Oct 2024 09:00:20 +0000 https://physicsworld.com/?p=117626 Experimental observations at the Swiss Light Source could advance the development of energy-efficient memory devices based on "orbitronics"

The post Orbital angular momentum monopoles appear in a chiral crystal appeared first on Physics World.

]]>
Magnets generally have two poles, north and south, so observing something that behaves like it has only one is extremely unusual. Physicists in Germany and Switzerland have become the latest to claim this rare accolade by making the first direct detection of structures known as orbital angular momentum monopoles. The monopoles, which the team identified in materials known as chiral crystals, had previously only been predicted in theory. The discovery could aid the development of more energy-efficient memory devices.

Traditional electronic devices use the charge of electrons to transfer energy and information. This transfer process is energy-intensive, however, so scientists are looking for alternatives. One possibility is spintronics, which uses the electron’s spin rather than its charge, but more recently another alternative has emerged that could be even more promising. Known as orbitronics, it exploits the orbital angular momentum (OAM) of electrons as they revolve around an atomic nucleus. By manipulating this OAM, it is in principle possible to generate large magnetizations with very small electric currents – a property that could be used to make energy-efficient memory devices.

Chiral topological semi-metals with “built-in” OAM textures

The problem is that materials that support such orbital magnetizations are hard to come by. However, Niels Schröter, a physicist at the Max Planck Institute of Microstructure Physics in Halle, Germany who co-led the new research, explains that theoretical work carried out in the 1980s suggested that certain crystalline materials with a chiral structure could generate an orbital magnetization that is isotropic, or uniform in all directions. “This means that the materials’ magnetoelectric response is also isotropic – it depends solely on the direction of the injected current and not on the crystals’ orientation,” Schröter says. “This property could be useful for device applications since it allows for a uniform performance regardless of how the crystal grains are oriented in a material.”

In 2019, three experimental groups (including the one involved in the latest work) independently discovered a type of material called a chiral topological semimetal that seemed to fit the bill. Atoms in these semimetals are arranged in a helical pattern, which produces something that behaves like a solenoid on the nanoscale, creating a magnetic field whenever an electric current passes through it.

The advantage of these materials, Schröter explains, is that they have “built-in” OAM textures. What is more, he says the specific texture discovered in the most recent work – an OAM monopole – is “special because the magnetic field response can be very large – and isotropic, too”.

Visualizing monopoles

Schröter and colleagues studied chiral topological semimetals made from either palladium and gallium or platinum and gallium (PdGa or PtGa). To understand the structure of these semimetals, they directed circularly polarized X-rays from the Swiss Light Source (SLS) onto samples of PdGa and PtGa prepared by Claudia Felser’s group at the Max Planck Institute in Dresden. In this technique, known as circular dichroism in angle-resolved photoemission spectroscopy (CD-ARPES), the synchrotron light ejects electrons from the sample, and the angles and energies of these electrons provide information about the material’s electronic structure.

“This technique essentially allows us to ‘visualize’ the orbital texture, almost like capturing an image of the OAM monopoles,” Schröter explains. “Instead of looking at the reflected light, however, we observe the emission pattern of electrons.” The new monopoles, he notes, reside in momentum (or reciprocal) space, which is the Fourier transform of our everyday three-dimensional space.

Complex data

One of the researchers’ main challenges was figuring out how to interpret the CD-ARPES data. This turned out to be anything but straightforward. Working closely with Michael Schüler’s theoretical modelling group at the Paul Scherrer Institute in Switzerland, they managed to identify the OAM textures hidden within the complexity of the measurement figures.

Contrary to what was previously thought, they found that the CD-ARPES signal was not directly proportional to the OAMs. Instead, it rotated around the monopoles as the energy of the photons in the synchrotron light source was varied. This observation, they say, proves that monopoles are indeed present.

The findings, which are detailed in Nature Physics, could have important implications for future magnetic memory devices. “Being able to switch small magnetic domains with currents passed through such chiral crystals opens the door to creating more energy-efficient data storage technologies, and possibly also logic devices,” Schröter says. “This study will likely inspire further research into how these materials can be used in practical applications, especially in the field of low-power computing.”

The researchers’ next task is to design and build prototype devices that exploit the unique properties of chiral topological semimetals. “Finding these monopoles has been a focus for us ever since I started my independent research group at the Max Planck Institute for Microstructure Physics in 2021,” Schröter tells Physics World. The team’s new goal, he adds, is to “demonstrate functionalities and create devices that can drive advancements in information technologies”.

To achieve this, he and his colleagues are collaborating with partners at the universities of Regensburg and Berlin. They aim to establish a new centre for chiral electronics that will, he says, “serve as a hub for exploring the transformative potential of chiral materials in developing next-generation technologies”.

The post Orbital angular momentum monopoles appear in a chiral crystal appeared first on Physics World.

]]>
Research update Experimental observations at the Swiss Light Source could advance the development of energy-efficient memory devices based on "orbitronics" https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_Hedgehog_Titelbild_16_9.jpg
Patient-specific quality assurance (PSQA) based on independent 3D dose calculation https://physicsworld.com/a/patient-specific-quality-assurance-psqa-based-on-independent-3d-dose-calculation/ Wed, 23 Oct 2024 09:28:12 +0000 https://physicsworld.com/?p=117273 Join the audience for a live webinar on 16 December 2024 sponsored by LAP GmbH Laser Applikationen

The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.

]]>

In this webinar, we will discuss how patient-specific quality assurance (PSQA) is an essential component of the radiation treatment process. This check ensures that the planned dose will be delivered to the patient. The growing number of patients with indications for modulated treatments requiring PSQA has significantly increased the workload of medical physics departments, creating a need for more efficient ways to perform it.

Measurement systems have evolved considerably in recent years, but the experimental process involved places a limit on the achievable time savings. Independent 3D dose calculation systems are presented as a solution to this problem, reducing the time needed to begin treatments.

The use of 3D dose calculation systems, as stated in international recommendations (TG219), requires a process of commissioning and adjustment of dose calculation parameters.

This presentation will show the implementation of PSQA based on independent 3D dose calculation for VMAT treatments in breast cancer using DICOM information from the plan and LOG files. Comparative results with measurement-based PSQA systems will also be presented.

An interactive Q&A session follows the presentation.

Dr Daniel Venencia is the chief of the medical physics department at Instituto Zunino – Fundación Marie Curie in Cordoba, Argentina. He holds a BSc in physics and a PhD from the Universidad Nacional de Córdoba (UNC), and has completed postgraduate studies in radiotherapy and nuclear medicine. With extensive experience in the field, Daniel has directed more than 20 MSc and BSc theses and three doctoral theses. He has delivered more than 400 presentations at national and international congresses, and has published in prestigious journals, including the Journal of Applied Clinical Medical Physics and the International Journal of Radiation Oncology, Biology and Physics. His work continues to make significant contributions to the advancement of medical physics.

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

The post Patient-specific quality assurance (PSQA) based on independent 3D dose calculation appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 16 December 2024 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2024/10/Webinar_PSQA_RadCalc_Dec_2024.jpg
On the proper use of a Warburg impedance https://physicsworld.com/a/on-the-proper-use-of-a-warburg-impedance/ Wed, 23 Oct 2024 08:20:06 +0000 https://physicsworld.com/?p=116636 Join the audience for a live webinar on 4 December 2024 sponsored by Gamry Instruments, Inc., BioLogic, Scribner and Metrohm Autolab, in partnership with The Electrochemical Society

The post On the proper use of a Warburg impedance appeared first on Physics World.

]]>

Recent battery papers commonly employ interpretation models for which diffusion impedances are in series with interfacial impedance. The models are fundamentally flawed because the diffusion impedance should be part of the interfacial impedance. A general approach is presented that shows how the charge-transfer resistance and diffusion resistance are functions of the concentration of reacting species at the electrode surface. The resulting impedance model incorporates diffusion impedances as part of the interfacial impedance.

An interactive Q&A session follows the presentation.

Mark Orazem obtained his BS and MS degrees from Kansas State University and his PhD in 1983 from the University of California, Berkeley. In 1983, he began his career as assistant professor at the University of Virginia, and in 1988 joined the faculty of the University of Florida, where he is Distinguished Professor of Chemical Engineering and Associate Chair for Graduate Studies. Mark is a fellow of The Electrochemical Society, International Society of Electrochemistry, and American Association for the Advancement of Science. He served as President of the International Society of Electrochemistry and co-authored, with Bernard Tribollet of the Centre national de la recherche scientifique (CNRS), the textbook entitled Electrochemical Impedance Spectroscopy, now in its second edition. Mark received the ECS Henry B. Linford Award, ECS Corrosion Division H. H. Uhlig Award, and with co-author Bernard Tribollet, the 2019 Claude Gabrielli Award for contributions to electrochemical impedance spectroscopy. In addition to writing books, he has taught short courses on impedance spectroscopy for The Electrochemical Society since 2000.


The Electrochemical Society

The post On the proper use of a Warburg impedance appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 4 December 2024 sponsored by Gamry Instruments, Inc., BioLogic, Scribner and Metrohm Autolab, in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/09/2024-12-04-webinar-image.jpg
Multi-qubit entangled states boost atomic clock and sensor performance https://physicsworld.com/a/multi-qubit-entangled-states-boost-atomic-clock-and-sensor-performance/ Tue, 22 Oct 2024 16:52:22 +0000 https://physicsworld.com/?p=117622 Greenberger–Horne–Zeilinger states increase measurement frequency

The post Multi-qubit entangled states boost atomic clock and sensor performance appeared first on Physics World.

]]>
Frequency measurements using multi-qubit entangled states have been performed by two independent groups in the US. These entangled states have correlated errors, resulting in measurement precisions better than the standard quantum limit. One team is based in Colorado and it measured the frequency of an atomic clock with greater precision than possible using conventional methods. The other group is in California and it showed how entangled states could be used in quantum sensing.

Atomic clocks are the most accurate timekeeping devices we have. They work by locking an ultraprecise frequency-comb laser to a narrow-linewidth transition in an atom. The higher the transition’s frequency, the faster the clock ticks and the more precisely it can keep time. The clock with the best precision today is operated by Jun Ye’s group at JILA in Boulder, Colorado and colleagues. After running for the age of the universe, this clock would be wrong by only 0.01 s.

The conventional way of improving precision is to use higher-energy, narrower transitions such as those found in highly charged ions and nuclei. These pose formidable challenges, however, both in locating the transitions and in producing stable high-frequency lasers to excite them.

Standard quantum limit

An alternative is to operate existing clocks in more sophisticated ways. “In an optical atomic clock, you’re comparing the oscillations of an atomic superposition with the frequency of a laser,” explains JILA’s Adam Kaufman. “At the end of the experiment, that atom can only be in the excited state or in the ground state, so to get an estimate of the relative frequencies you need to sample that atom many times, and the precision goes like one over the square root of the number of samples.” This is the standard quantum limit, and is derived from the assumption that the atoms collapse randomly, producing random noise in the frequency estimate.

If, however, multiple atoms are placed into a Greenberger–Horne–Zeilinger (GHZ) entangled state and measured simultaneously, information can be acquired at a higher frequency without increasing the fundamental frequency of the transition. JILA’s Alec Cao explains, “Two atoms in a GHZ state are not just two independent atoms. Both the atoms are in the zero state, so the state has an energy of zero, or both the atoms are in the upper state so it has an energy of two. And as you scale the size of the system the energy difference increases.”
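Cao’s point about the energy gap growing with system size is what gives GHZ states their metrological edge: the relative phase between the two branches of an N-atom GHZ state winds N times faster than for a single atom. A minimal sketch of that scaling (illustrative only; the function and parameter names are our own):

```python
import math

def ghz_relative_phase(n_atoms, detuning_hz, time_s):
    """Relative phase accumulated between the |00...0> and |11...1>
    branches of an N-atom GHZ state: the two branches differ by N
    excitation energies, so the phase winds N times faster than for
    a single atom at the same detuning."""
    return 2 * math.pi * n_atoms * detuning_hz * time_s

phi_1 = ghz_relative_phase(1, 10.0, 1e-3)   # one atom, 10 Hz detuning, 1 ms
phi_4 = ghz_relative_phase(4, 10.0, 1e-3)   # four-atom GHZ state
print(phi_4 / phi_1)  # four times the phase accumulation
```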

Unfortunately, the lifetime of a GHZ state is inversely proportional to its size: though precision can be acquired in a shorter time, the time window for measurement also shrinks, cancelling out the benefit. Mark Saffman of the University of Wisconsin-Madison explains, “This idea was suggested about 20 years ago that you could get around this by creating GHZ states of different sizes, and using the smallest GHZ state to measure the least significant bit of your measurement, and as you go to larger and larger GHZ states you’re adding more significant bits to your measurement result.”
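The digit-by-digit idea Saffman describes can be sketched in a few lines. This is a noiseless toy model, not the experimental protocol: an n-atom GHZ state accumulates phase n times faster, so its readout reveals the phase only modulo 2π/n, and combining different sizes resolves the ambiguity one binary digit at a time.

```python
import math

def ghz_readout(phi, n):
    """Idealized GHZ signals: an n-atom GHZ state accumulates phase n*phi,
    so measuring two quadratures reveals (n*phi) modulo 2*pi."""
    return math.cos(n * phi), math.sin(n * phi)

def estimate_phase(phi_true, sizes=(1, 2, 4, 8)):
    """Combine GHZ states of increasing size: each doubling refines the
    estimate, using the previous (coarser) estimate to pick the right
    branch of the 2*pi/n ambiguity."""
    estimate = 0.0
    for n in sizes:
        c, s = ghz_readout(phi_true, n)
        wrapped = math.atan2(s, c)  # n*phi modulo 2*pi, in (-pi, pi]
        # choose the branch of wrapped/n closest to the current estimate
        k = round((n * estimate - wrapped) / (2 * math.pi))
        estimate = (wrapped + 2 * math.pi * k) / n
    return estimate

print(estimate_phase(1.0))  # recovers 1.0 even though 8*1.0 wraps past 2*pi
```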

In the Colorado experiment, Kaufman, Cao and colleagues used a novel, multi-qubit entangling technique to create GHZ states of Rydberg atoms in a programmable optical tweezer lattice. A Rydberg atom is an atom with one or more electrons in a highly-excited state. They showed that, when interrogated for short times, four-atom GHZ states achieved higher precisions than could be achieved with the same number of uncorrelated atoms. They also constructed gates of up to eight qubits. However, owing to their short lifetimes, they were unable to beat the standard quantum limit with these.

Cascade of GHZ qubits

The Colorado team therefore constructed a cascade of GHZ qubits of increasing sizes, with the largest containing eight atoms. They showed that the fidelity achieved by the cascade was superior to the fidelity achieved by a single large GHZ qubit. Cao compares this to using the large GHZ state on a clock as the second hand while progressively smaller states act as the minute and hour hands. The team did not demonstrate higher phase sensitivity than could theoretically be achieved with the same number of unentangled atoms, but Cao says this is simply a technical challenge.

Meanwhile in California, Manuel Endres and colleagues at Caltech also used GHZ states to do precision spectroscopy on the frequency of an atomic clock using Rydberg atoms in an optical tweezer array. They used a slightly different technique for preparing the GHZ states. This did not allow them to prepare such large GHZ states as their Coloradan counterparts, although Endres argues that their technique should be more scalable. The Caltech work, however, focused on mapping the output data onto “ancilla” qubits and demonstrating a universal set of quantum logic operations.

“The question is, ‘How can a quantum computer help you for a sensor?’” says Endres. “If you had a universal quantum computer that somehow produced a GHZ state on your sensor you could improve the sensing capabilities. The other thing is to take the signal from a quantum computer and do quantum post-processing on that signal. The vision in our [work] is to have a quantum computer integrated with a sensor.”

Saffman, who was not involved with either group, praises the work of both teams. He congratulates the Coloradans for setting out to build a better clock and succeeding – and praises the Californians for going in “another direction” with their GHZ states.  Saffman says he would like to see the researchers produce larger GHZ states and show that such states can not only confer an improvement on a clock with the same limitations as a similar clock measured with random atoms, but can produce the world’s best clock overall.

The research is described in two papers in Nature (California paper, Colorado paper).

The post Multi-qubit entangled states boost atomic clock and sensor performance appeared first on Physics World.

]]>
Research update Greenberger–Horne–Zeilinger states increase measurement frequency https://physicsworld.com/wp-content/uploads/2024/10/22-10-2024-GHZ-atomic-clock-team.jpg newsletter
Gems from the Physics World archive: Isaac Asimov https://physicsworld.com/a/gems-from-the-physics-world-archive-isaac-asimov/ Tue, 22 Oct 2024 15:20:02 +0000 https://physicsworld.com/?p=117617 Science-fiction fans in the Physics World team have a clear favourite from 36 years of articles

The post Gems from the <em>Physics World</em> archive: Isaac Asimov appeared first on Physics World.

]]>
Cartoon illustration of Isaac Asimov

Since 1988 Physics World has boasted among its authors some of the most eminent physicists of the 20th and 21st centuries, as well as some of the best popular-science authors. But while I am, in principle, aware of this, it can still be genuinely exciting to discover who wrote for Physics World before I joined the team in 2011. And for me – a self-avowed book nerd – the most exciting discovery was an article written by Isaac Asimov in 1990.

Asimov is best remembered for his hard science fiction. His Foundation trilogy (1951–1953) and decades of robot stories first collected in I, Robot (1950) are so seminal they have contributed words and concepts to the popular imagination, far beyond actual readers of his work. If you’ve ever heard of the Laws of Robotics (the first of which is that “a robot shall not harm a human, or by inaction allow a human to come to harm”), that was Asimov’s work.

I was introduced to Asimov through what remains the most “hard physics”-heavy sci-fi I have ever tackled: The Gods Themselves (1972). In this short novel, humans make contact with a parallel universe and manage to transfer energy from a parallel world to Earth. When a human linguist attempts to communicate with the “para-men”, he discovers this transfer may be dangerous. The narrative then switches to the parallel world, which is populated by the most “alien” aliens I can remember encountering in fiction.

Underlying this whole premise, though, is the fact that in the parallel world, the strong nuclear force, which binds protons and neutrons together, is even stronger than it is in our own. And Asimov was a good enough scientist that he worked into his novel everything that would be different – subtly or significantly – were this the case. It’s a physics thought experiment; a highly entertaining one that also encompasses ethics, astrobiology, cryptanalysis and engineering.

Of course, Asimov wrote non-fiction, too. His 500+ books include such titles as Understanding Physics (1966), Atom: Journey Across the Subatomic Cosmos (1991) and the extensive Library of the Universe series (1988–1990). The last two of these even came out while Physics World was being published.

So what did this giant of sci-fi and science communication write about for Physics World?

It was, of all things, a review of a book by someone else: specifically, Think of a Number by Malcolm E Lines, a British mathematician. Lines isn’t nearly so famous as his reviewer, but he was still writing popular-science books about mathematics as recently as 2020. Was Asimov impressed? You’ll have to read his review to find out.

The post Gems from the <em>Physics World</em> archive: Isaac Asimov appeared first on Physics World.

]]>
Blog Science-fiction fans in the Physics World team have a clear favourite from 36 years of articles https://physicsworld.com/wp-content/uploads/2024/10/Isaac-Asimov-featured-2278521053-Shutterstock_Mei-Zendra-editorial-use-only-scaled.jpg newsletter
Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? https://physicsworld.com/a/negative-triangularity-tokamaks-a-power-plant-plasma-solution-from-the-core-to-the-edge/ Tue, 22 Oct 2024 10:05:24 +0000 https://physicsworld.com/?p=117448 IOP Publishing's journal Plasma Physics and Controlled Fusion explores the knowns and unknowns of negative triangularity and evaluates its future as a power plant solution

The post Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? appeared first on Physics World.

]]>
The webinar is directly linked with a special issue of Plasma Physics and Controlled Fusion on Advances in the Physics Basis of Negative Triangularity Tokamaks, featuring contributions from all of the speakers and many more papers from the leading groups researching this fascinating topic.

In recent years the fusion community has begun to focus on the practical engineering of tokamak power plants. From this work it became clear that the power exhaust problem – extracting the energy produced by fusion without melting the plasma-facing components – is just as important and challenging as plasma confinement. To this end, negative triangularity plasma shaping holds unique promise.

Conceptually, negative triangularity is simple. Take the standard positive triangularity plasma shape, ubiquitous among tokamaks, and flip it so that the triangle points inwards. By virtue of this change in shape, negative triangularity plasmas have been experimentally observed to dramatically improve energy confinement, sometimes by more than a factor of two. Simultaneously, the plasma shape is also found to robustly prevent the transition to the improved confinement regime H-mode. While this may initially seem a drawback, the confinement improvement can enable negative triangularity to still achieve similar confinement to a positive triangularity H-mode. In this way, it robustly avoids the typical difficulties of H-mode: damaging edge localized modes (ELMs) and the narrow scrape-off layer (SOL) width. This is the promise of negative triangularity, an elegant and simple path to alleviating power exhaust while preserving plasma confinement.

The biggest drawback at present is uncertainty. No tokamak in the world is designed to create negative triangularity plasmas, and the configuration has received only a fraction of the theory community’s attention. In this webinar, through both theory and experiment, we will explore the knowns and unknowns of negative triangularity and evaluate its future as a power plant solution.

Justin Ball (chair) is a research scientist at the Swiss Plasma Center at EPFL in Lausanne, Switzerland. He earned his Master’s from MIT in 2013 and his PhD from Oxford University in 2016, studying the effects of plasma shaping in tokamaks, for which he was awarded the European Plasma Physics PhD Award. In 2019 he and Jason Parisi published the popular-science book The Future of Fusion Energy. Currently, Justin is the principal investigator of the EUROfusion TSVV 2 project, a ten-person team evaluating the reactor prospects of negative triangularity using theory and simulation.

Alessandro Balestri is a PhD student at the Swiss Plasma Center (SPC) located within the École Polytechnique Fédérale de Lausanne (EPFL). His research focuses on using experiments and gyrokinetic simulations to achieve a deep understanding of how negative triangularity reduces turbulent transport in tokamak plasmas and how this beneficial effect can be optimized in view of a fusion power plant. He received his Bachelor’s and Master’s degrees in physics at the University of Milano-Bicocca, where he carried out a thesis on the first gyrokinetic simulations for the negative triangularity option of the novel Divertor Tokamak Test facility.

Andrew “Oak” Nelson is an associate research scientist with Columbia University where he specializes in negative triangularity (NT) experiments and reactor design. Oak received his PhD in plasma physics from Princeton University in 2021 for work on the H-mode pedestal in DIII-D and has since dedicated his career to uncovering mechanisms to mitigate the power-handling needs faced by tokamak fusion pilot plants. Oak is an expert in the edge regions of NT plasmas and one of the co-leaders of the EU-US Joint Task Force on Negative Triangularity Plasmas. In addition to NT work, Oak consults regularly on various physics topics for Commonwealth Fusion Systems and heads several fusion-outreach efforts.

Tim Happel is the head of the Plasma Dynamics Division at the Max Planck Institute for Plasma Physics in Garching near Munich. His research centres around turbulence and tokamak operational modes with enhanced energy confinement. He is particularly interested in the physics of the Improved Energy Confinement Mode (I-Mode) and plasmas with negative triangularity. During his PhD, which he received in 2010 from the University Carlos III in Madrid, he developed a Doppler backscattering system for the investigation of plasma flows and their interaction with turbulent structures. For this work, he was awarded the Itoh Prize for Plasma Turbulence.

Haley Wilson is a PhD candidate studying plasma physics at Columbia University. Her main research interest is the integrated modelling of reactor-class tokamak core scenarios, with a focus on highly radiative, negative triangularity scenarios. The core modelling of MANTA is her first published work in this area, but her most recent manuscript submission expands the MANTA study to a broader operational space. She was recently selected for an Office of Science Graduate Student Research award, to work with Oak Ridge National Laboratory on whole device modelling of negative triangularity tokamaks using the FREDA framework.

Olivier Sauter obtained his PhD at CRPP-EPFL, Lausanne, Switzerland in 1992, followed by postdocs at General Atomics (1992–93) and ITER-San Diego (1995–96), leading to the bootstrap current coefficients and experimental studies of neoclassical tearing modes. He has been a JET Task Force Leader and EUROfusion Research Topic Coordinator, received the 2013 John Dawson Award for excellence in plasma physics research, and has been an ITER Scientist Fellow in the area of integrated modelling since 2016. He is a senior scientist at SPC-EPFL, supervising several PhD theses, and is active on AUG, DIII-D, JET, TCV and WEST, focusing on real-time simulations and negative triangularity plasmas.

About this journal

Plasma Physics and Controlled Fusion is a monthly publication dedicated to the dissemination of original results on all aspects of plasma physics and associated science and technology.

Editor-in-chief: Jonathan Graves, University of York, UK, and EPFL, Switzerland.

 

The post Negative triangularity tokamaks: a power plant plasma solution from the core to the edge? appeared first on Physics World.

]]>
Webinar IOP Publishing's journal Plasma Physics and Controlled Fusion explores the knowns and unknowns of negative triangularity and evaluates its future as a power plant solution https://physicsworld.com/wp-content/uploads/2024/10/diii-d_cut.png
How a next-generation particle collider could unravel the mysteries of the Higgs boson https://physicsworld.com/a/how-a-next-generation-particle-collider-could-unravel-the-mysteries-of-the-higgs-boson/ Tue, 22 Oct 2024 10:00:59 +0000 https://physicsworld.com/?p=117149 Tulika Bose, Philip Burrows and Tara Shears discuss proposals for the next big particle collider

The post How a next-generation particle collider could unravel the mysteries of the Higgs boson appeared first on Physics World.

]]>
More than a decade following the discovery of the Higgs boson at the CERN particle-physics lab near Geneva in 2012, high-energy physics stands at a crossroads. While the Large Hadron Collider (LHC) is currently undergoing a major £1.1bn upgrade towards a High-Luminosity LHC (HL-LHC), the question facing particle physicists is what machine should be built next – and where – if we are to study the Higgs boson in unprecedented detail in the hope of revealing new physics.

Several designs exist, one of which is a huge 91 km circumference collider at CERN known as the Future Circular Collider (FCC). But new technologies are also offering tantalising alternatives to such large machines, notably a muon collider. As CERN celebrates its 70th anniversary this year, Michael Banks talks to Tulika Bose from the University of Wisconsin–Madison, Philip Burrows from the University of Oxford and Tara Shears from the University of Liverpool about the latest research on the Higgs boson, what the HL-LHC might discover and the range of proposals for the next big particle collider.

Tulika Bose, Philip Burrows and Tara Shears

What have we learnt about the Higgs boson since it was discovered in 2012?

Tulika Bose (TB): The question we have been working towards in the past decade is whether it is a “Standard Model” Higgs boson or a sister, or a cousin or a brother of that Higgs. We’ve been working really hard to pin it down by measuring its properties. All we can say at this point is that it looks like the Higgs that was predicted by the Standard Model. However, there are so many questions we still don’t know. Does it decay into something more exotic? How does it interact with all of the other particles in the Standard Model? While we’ve understood some of these interactions, there are still many more particle interactions with the Higgs that we don’t quite understand. Then of course, there is a big open question about how the Higgs interacts with itself. Does it, and if so, what is its interaction strength? These are some of the exciting questions that we are currently trying to answer at the LHC.

So the Standard Model of particle physics is alive and well?

TB: The fact that we haven’t seen anything exotic that has not been predicted yet tells us that we need to be looking at a different energy scale. That’s one possibility – we just need to go to much higher energies. The other alternative is that we’ve been looking in the standard places. Maybe there are particles that we haven’t yet been able to detect that couple incredibly lightly to the Higgs.

Has it been disappointing that the LHC hasn’t discovered particles beyond the Higgs?

Tara Shears (TS): Not at all. The Higgs alone is such a huge step forward in completing our picture and understanding of the Standard Model – providing, of course, it is a Standard Model Higgs. And there’s so much more that we’ve learned aside from the Higgs, such as the behaviour of other particles, including differences between matter and antimatter in charm quarks.

How will the HL-LHC take our understanding of the Higgs forward?

TS: One way to understand more about the Higgs is to amass enormous amounts of data to look for very rare processes, and this is where the HL-LHC is really going to come into its own. It is going to allow us to extend those investigations beyond the particles we’ve been able to study so far, making our first observations of how the Higgs interacts with lighter particles such as the muon and how the Higgs interacts with itself. We hope to see that with the HL-LHC.

What is involved with the £1.1bn HL-LHC upgrade?

Philip Burrows (PB): The LHC accelerator is 27 km long and about 90% of it is not going to be affected. One of the most critical aspects of the upgrade is to replace the magnets in the final focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and then focus them down to very small sizes of the order of 10 microns in cross section. This upgrade includes the installation of brand new state-of-the-art niobium-tin (Nb3Sn) superconducting focusing magnets.

Engineer working on the HL-LHC upgrade in the LHC tunnel

What is the current status of the project?

PB: The schedule involves shutting down the LHC for roughly three to four years to install the high-luminosity upgrade, which will then turn on towards the end of the decade. The current CERN schedule has the HL-LHC running until the end of 2041. So there’s another 10 years plus of running this upgraded collider and who knows what exciting discoveries are going to be made.

TS: One thing to think about concerning the cost is that the timescale of use is huge and so it is an investment for a considerable part of the future in terms of scientific exploitation. It’s also an investment in terms of potential spin-out technology.

In what way will the HL-LHC be better than the LHC?

PB: The measure of the performance of the accelerator is conventionally given in terms of luminosity and it’s defined as the number of particles that cross at these collision points per square centimetre per second. That number is roughly 10³⁴ with the LHC. With the high-luminosity upgrade, however, we are talking about making roughly an order of magnitude increase in the total data sample that will be collected over the next decade or so. So in other words, we’ve only got 10% or so of the total data sample so far in the bag. After the upgrade, there’ll be another factor of 10 of data that will be collected, and that is a completely new ball game in terms of the statistical accuracy of the measurements that can be made and the sensitivity and reach for new physics.
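To put the factor of ten in context, event counts scale as cross-section times integrated luminosity. Here is a back-of-envelope sketch with illustrative numbers not taken from the interview: a Higgs production cross-section of roughly 50 pb and integrated luminosities of roughly 300 fb⁻¹ collected so far versus roughly 3000 fb⁻¹ targeted by the HL-LHC.

```python
# Events = cross-section x integrated luminosity.
PB_PER_INVERSE_FB = 1e3   # 1 fb^-1 corresponds to 1000 pb^-1

def n_events(sigma_pb, integrated_lumi_fb):
    return sigma_pb * integrated_lumi_fb * PB_PER_INVERSE_FB

higgs_so_far = n_events(50, 300)    # ~15 million Higgs bosons so far
higgs_hl_lhc = n_events(50, 3000)   # ~150 million after the HL-LHC
# Statistical precision improves as 1/sqrt(N), so 10x the data sharpens
# measurements by about a factor of 3
gain = (higgs_hl_lhc / higgs_so_far) ** 0.5
print(higgs_so_far, higgs_hl_lhc, round(gain, 2))
```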

Looking beyond the HL-LHC, particle physicists seem to agree that the next particle collider should be a Higgs factory – but what would that involve?

TB: Even at the end of the HL-LHC, there will be certain things we won’t be able to do at the LHC and that’s for several reasons. One is that the LHC is a proton–proton machine and when you’re colliding protons, you end up with a rather messy environment in comparison to the clean collisions between electrons and positrons and this allows you to make certain measurements which will not be possible at the LHC.

So what sort of measurements could you do with a Higgs factory?

TS:  One is to find out how much the Higgs couples to the electron. There’s no way we will ever find that out with the HL-LHC, it’s just too rare a process to measure, but with a Higgs factory, it becomes a possibility. And this is important not because it’s stamp collecting, but because understanding why the mass of the electron, which the Higgs boson is responsible for, has that particular value is of huge importance to our understanding of the size of atoms, which underpins chemistry and materials science.

PB: Although we often call this future machine a Higgs factory, it has far more uses beyond making Higgs bosons. If you were to run it at higher energies, for example, you could make pairs of top quarks and anti-top quarks. And we desperately want to understand the top quark, given it is the heaviest fundamental particle that we are aware of – it’s roughly 180 times heavier than a proton. You could also run the Higgs factory at lower energies and carry out more precision measurements of the Z and W bosons. So it’s really more than a Higgs factory. Some people say it’s the “Higgs and the electroweak boson factory” but that doesn’t quite roll off the tongue in the same way.

Artist concept of the International Linear Collider

While it seems there’s a consensus on a Higgs factory, there doesn’t appear to be one regarding building a linear or circular machine?

PB: There are two main designs on the table today – circular and linear. The motivation for linear colliders stems from the problem of sending electrons and positrons round in a circle – they radiate photons. So as you go to higher energies in a circular collider, electrons and positrons radiate that energy away in the form of synchrotron radiation. It was felt back in the late 1990s that it was the end of the road for circular electron–positron colliders because of the limitations of synchrotron radiation. But the Higgs boson, discovered at 125 GeV, turned out to be lighter than some had predicted. This meant that an electron–positron collider would only need a centre-of-mass energy of about 250 GeV. Circular electron–positron colliders then came back in vogue.
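The synchrotron-radiation penalty Burrows describes can be quantified with the standard textbook result for energy lost per turn, which scales as (E/m)⁴/ρ. The numbers below are illustrative and not from the interview.

```python
# Energy radiated per turn in a circular machine scales as (E/m)^4 / rho.
# For electrons the standard closed form is U0 [GeV] ~ 8.85e-5 * E[GeV]^4 / rho[m];
# other species are rescaled by (m_e/m)^4.
C_GAMMA = 8.85e-5            # GeV^-3 * m, electron coefficient
M_E, M_P = 0.000511, 0.938   # electron and proton rest masses in GeV

def loss_per_turn_gev(energy_gev, rho_m, mass_gev=M_E):
    return C_GAMMA * energy_gev ** 4 / rho_m * (M_E / mass_gev) ** 4

# LEP-like electrons: 100 GeV on a ~3.1 km bending radius lose ~3 GeV per turn
print(loss_per_turn_gev(100, 3100))
# Protons at the same energy and radius: suppressed by (m_p/m_e)^4 ~ 1e13
print(loss_per_turn_gev(100, 3100, M_P))
```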

TS: The drawback with a linear collider is that the beams are not recirculated in the same way as they are in a circular collider. Instead, you have “shots”, so it’s difficult to reach the same volume of data in a linear collider. Yet it turns out that both of these solutions are really competitive with each other and that’s why they are still both on the table.

PB: Yes, while a circular machine may have two, or even four, main detectors in the ring, at a linear machine the beam can be sent to only one detector at a given time. So having two detectors means you have to share the luminosity, so each would get notionally half of the data. But to take an automobile analogy, it’s kind of like arguing about the merits of a Rolls-Royce versus a Bentley. Both linear and circular are absolutely superb, amazing options and some have got bells and whistles over here and others have got bells and whistles over there, but you’re really arguing about the fine details.

CERN seems to have put its weight behind the Future Circular Collider (FCC) – a huge 91 km circumference circular collider that would cost £12bn. What’s the thinking behind that?

TS: The cost is about one-and-a-half times that of the Channel Tunnel so it is really substantial infrastructure. But bear in mind it is for a facility that’s going to be used for the remainder of the century, for future physics, so you have to keep that longevity in mind when talking about the costs.

TB: I think the circular collider has become popular because it’s seen as a stepping stone towards a proton–proton machine operating at 100 TeV that would use the same infrastructure and the same large tunnel and begin operation after the Higgs factory element in the 2070s. That would allow us to really pin down the Higgs interaction with itself and it would also be the ultimate discovery machine, allowing us to discover particles at the 30–40 TeV scale, for example.

Artist concept of the Future Circular Collider

What kind of technologies will be needed for this potential proton machine?

PB: The big issue is the magnets, because you have to build very strong bending magnets to keep the protons going round on their 91 km circumference trajectory. The magnets at the LHC are 8 T but some think the magnets you would need for the proton version of the FCC would be 16–20 T. And that is really pushing the boundaries of magnet technology. Today, nobody really knows how to build such magnets. There’s a huge R&D effort going on around the world and people are constantly making progress. But that is the big technological uncertainty. Yet if we follow the model of an electron–positron collider first, followed by a proton–proton machine, then we will have several decades in which to master the magnet technology.

With regard to novel technology, the influential US Particle Physics Project Prioritization Panel, known as “P5”, called for more research into a muon collider, calling it “our muon shot”. What would that involve?

TB: Yes, I sat on the P5 panel that published a report late last year that recommended a course of action for US particle physics for the coming 20 years. One of those recommendations involves carrying out more research and development into a muon collider. As we already discussed, an electron–positron collider in a circular configuration suffers from a lot of synchrotron radiation. The question is whether we can instead use a fundamental elementary particle that is more massive than the electron. In that case a muon collider could offer the best of both worlds: the advantages of an electron machine in terms of clean collisions, but also reaching larger energies like a proton machine. However, the challenge is that the muon is very unstable and decays quickly. This means you are going to have to create, focus and collide them before they decay. A lot of R&D is needed in the coming decades but perhaps a decision could be taken on whether to go ahead by the 2050s.
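The race against the muon’s decay can be made concrete with special relativity (illustrative numbers, not from the interview): the 2.2 μs rest-frame lifetime is stretched by the Lorentz factor γ = E/mc².

```python
TAU_MU = 2.197e-6    # s, muon rest-frame lifetime
M_MU = 0.1057        # GeV, muon mass
C_LIGHT = 2.998e8    # m/s

def lab_lifetime_s(energy_gev):
    gamma = energy_gev / M_MU   # Lorentz factor for an ultra-relativistic muon
    return gamma * TAU_MU

# A 1 TeV muon lives ~20 ms in the lab frame and travels ~6000 km at ~c,
# so creation, cooling, acceleration and collision must all fit in that window.
t_lab = lab_lifetime_s(1000)
print(t_lab, t_lab * C_LIGHT / 1000)   # lifetime in s, flight distance in km
```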

And potentially, if built, it would need a tunnel of similar size to the existing LHC?

TB: Yes. The nice thing about the muon collider is that you don’t need a massive 90 km tunnel so it could actually fit on the existing Fermilab campus. Perhaps we need to think about this project in a global way because this has to be a big global collaborative effort. But whatever happens it is exciting times ahead.

  • Tulika Bose, Philip Burrows and Tara Shears were speaking on a Physics World Live panel discussion about the future of particle physics held on 26 September 2024. This Q&A is an edited version of the event, which you can watch online now

The post How a next-generation particle collider could unravel the mysteries of the Higgs boson appeared first on Physics World.

]]>
Feature Tulika Bose, Philip Burrows and Tara Shears discuss proposals for the next big particle collider https://physicsworld.com/wp-content/uploads/2024/10/Fractal-image-of-particle-fission-1238252527-shutterstock_sakkmesterke.jpg newsletter1
Objects with embedded spins could test whether quantum measurement affects gravity https://physicsworld.com/a/objects-with-embedded-spins-could-test-whether-quantum-measurement-affects-gravity/ Mon, 21 Oct 2024 17:26:22 +0000 https://physicsworld.com/?p=117585 Experiment could involve sending tiny diamonds through interferometers

The post Objects with embedded spins could test whether quantum measurement affects gravity appeared first on Physics World.

]]>
A new experiment to determine whether or not gravity is affected by the act of measurement has been proposed by theoretical physicists in the UK, India and the Netherlands. The experiment is similar to one outlined by the same group in 2017 to test whether or not two masses could become quantum-mechanically entangled by gravity, but the latest version could potentially be easier to perform.

An important outstanding challenge in modern theoretical physics is how to reconcile Einstein’s general theory of relativity – which describes gravity – with quantum theory, which describes just about everything else in physics.

“You can quantize gravity,” explains Daniel Carney of the Lawrence Berkeley National Laboratory in California, who was not involved in this latest research. However, he adds, “Gravitational wave detection is extremely quantum mechanical… [Gravity is] a normal quantum field theory and it works fine: it just predicts its own breakdown near black hole singularities and the Big Bang and things like that.”

Multiple experimental groups around the world are seeking to test whether the gravitational field can exist in non-classical states that would be fundamentally inconsistent with general relativity. If it cannot, that would suggest that quantum gravity breaks down at high energies because gravity is not a quantum field. Performing these tests, however, is extraordinarily difficult because it requires objects that are both small enough to be detectably affected by the laws of quantum mechanics and yet massive enough for their gravitation to be measured.

Hypothetical analogy

Now, Sougato Bose of University College London and colleagues have proposed a test to determine whether or not the quantum state of a massive particle is affected by the detection of its mass. The measurement postulate in quantum mechanics says that it should be affected. Bose offers a hypothetical analogy: a photon passes through an interferometer, splitting its quantum wavefunction into two paths. Both paths interact equally with a mass in a delocalized superposition state. When the paths recombine, the output photon always emerges from the same port of the interferometer. If, however, the position of the mass is detected using another mass, the superposition collapses, the photon wavefunction no longer interacts equally with the mass along each arm and the photon may consequently emerge from the other port.

However, this conceptually simple test is experimentally impracticable. For a mass to exert a gravitational field sufficient for another mass to detect it, it needs to be at least 10⁻¹⁴ kg – about a micron in size: “A micron-sized mass does not go into a quantum superposition, because a beamsplitter is like a potential barrier, and a large mass doesn’t tunnel across a barrier of sufficient height,” explains Bose.
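To see why such measurements take decades to engineer, consider the gravitational force between two masses at the scale Bose quotes. This is a back-of-envelope estimate; the 10-micron separation is chosen purely for illustration.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def grav_force_n(m1_kg, m2_kg, r_m):
    return G * m1_kg * m2_kg / r_m ** 2

# Two 1e-14 kg masses (the micron scale quoted above), 10 microns apart:
F = grav_force_n(1e-14, 1e-14, 10e-6)
print(F)   # ~7e-29 newtons -- extraordinarily feeble
```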

The solution to this problem, according to Bose and colleagues, is to use a small diamond crystal containing a single nitrogen vacancy centre – which contains a quantum spin. At the beginning of the experiment, a microwave pulse would initialize the vacancy into a spin superposition. The crystal would then pass through a Stern–Gerlach interferometer, where it would experience a magnetic field gradient.

Nitrogen vacancy centres are magnetic, so opposite spins would be deflected in opposite directions by the magnetic field gradient. Crystals with spins in superposition states would be deflected both ways simultaneously. The spins could then be inverted using another microwave pulse, causing the crystals to recombine with themselves without providing any information about which path they had taken. However, if a second interferometer were placed close enough to detect the gravitational field produced by the first mass, it would collapse the superposition, providing “which path” information and affecting the result measured by the first interferometer.

Stern–Gerlach interferometers

In 2017, Bose and colleagues proposed a similar setup to test whether or not the gravitational attraction between two masses could lead to quantum entanglement of spins in two Stern–Gerlach interferometers. However, Bose argues the new test could be easier to perform, as it would not require measurement of both spins simultaneously – simply for a second interferometer to perform some kind of gravitational detection of the first mass’s position. “If you see a difference, then you can immediately conclude that an update on a quantum measurement is happening.”

Moreover, Bose says that the inevitable invasiveness of a measurement is a different postulate of quantum mechanics from the formation of quantum entanglement between the two particles as a result of their interaction. In a hypothetical theory going beyond both quantum mechanics and general relativity, one of them could hold but not the other. The researchers are now investigating potential ways to implement their proposal in practice – something Bose predicts will take at least 15 years.

Carney sees some merit in the proposal. “I do like the one-sided test nature of things like this, and they are, in some sense, easier to execute,” he says. “But the reason these things are so hard is that I need to take a small system and measure its gravitational field, and this does not avoid that problem at all.”

A paper describing the research has been accepted for publication in Physical Review Letters and is available on the arXiv pre-print server.

The post Objects with embedded spins could test whether quantum measurement affects gravity appeared first on Physics World.

Research update: Experiment could involve sending tiny diamonds through interferometers
Flocking together: the physics of sheep herding and pedestrian flows
https://physicsworld.com/a/flocking-together-the-physics-of-sheep-herding-and-pedestrian-flows/
Mon, 21 Oct 2024 16:04:38 +0000
Learn how the science of crowd movements can help shepherds and urban designers

The post Flocking together: the physics of sheep herding and pedestrian flows appeared first on Physics World.


In this episode of Physics World Stories, host Andrew Glester shepherds you through the fascinating world of crowd dynamics. While gazing at a flock of sheep or meandering through a busy street, you may not immediately think of the physics at play – but there is more of it than you might expect. Give the episode a listen to discover the surprising science behind how animals and people move together in large groups.

The first guest, Philip Ball, a UK-based science writer, explores the principles that underpin the movement of sheep in flocks. Insights from physics can even be used to inform herding tactics, whereby dogs are guided – usually through whistles – to control flocks of sheep and direct them towards a chosen destination. For even more detail, check out Ball’s recent Physics World feature “Field work – the physics of sheep, from phase transitions to collective motion”.

Next, Alessandro Corbetta, from Eindhoven University of Technology in the Netherlands, talks about his research on pedestrian flow that won him an Ig Nobel Prize. Corbetta explains how his research field is helping us understand – and manage – the movements of human crowds in bustling spaces such as museums, transport hubs and stadia. Plus, he shares how winning the Ig Nobel has enabled the research to reach a far broader audience than he initially imagined.

Confused by the twin paradox? Maybe philosophy can help
https://physicsworld.com/a/confused-by-the-twin-paradox-maybe-philosophy-can-help/
Mon, 21 Oct 2024 10:00:53 +0000
Robert P Crease discusses a puzzle that goes to the heart of science and philosophy

The post Confused by the twin paradox? Maybe philosophy can help appeared first on Physics World.

Once upon a time, a man took a fast rocket to a faraway planet. He soon missed his home world and took a fast rocket back. His twin sister, a physicist, was heartbroken, saying that they were no longer twins and that her sibling was now younger than she due to the phenomenon of time dilation.

But her brother, who was a philosopher, said that they had experienced time equally and so were truthfully the same age. And verily, physicists and philosophers have quarrelled ever since – physicists speaking of clocks and philosophers of time.

This scenario illustrates a famously counterintuitive implication of the special theory of relativity known as the “twin paradox”. It’s a puzzle that two physicists (Adam Frank and Marcello Gleiser) and a philosopher (Evan Thompson) have now taken up in a new book called The Blind Spot. The book shows how closely bound up philosophy and physics are, and how easily their practitioners can misunderstand each other.

Got time?

Albert Einstein implicitly proposed time dilation in his famous 1905 paper “On the electrodynamics of moving bodies” (Ann. Phys. 17 891), which inaugurated the special theory of relativity. If two identical clocks are synchronized and one then travels at a speed relative to the other and back, the theory implied, then when the clocks are compared one would see a difference in the time registered by the two. The clock that had travelled and returned would have run slower and therefore be “younger”.
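The size of that difference follows directly from the Lorentz factor. With illustrative numbers (not from the article) – a traveller moving at $v = 0.8c$ on a round trip that lasts 10 years as measured at home:

```latex
\Delta t_{\text{traveller}} = \Delta t_{\text{home}}\,\sqrt{1 - \frac{v^2}{c^2}}
  = (10\ \text{yr})\sqrt{1 - 0.8^2}
  = (10\ \text{yr}) \times 0.6
  = 6\ \text{yr}
```

The travelling clock registers 6 years while the stay-at-home clock registers 10 – the travelling twin returns four years “younger”.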


At around the same time that Einstein was putting together the theory of relativity, the French philosopher Henri Bergson (1859–1941) was working out a theory of time. In Time and Free Will, his doctoral thesis published in 1889, Bergson argued that time, considered most fundamentally, does not consist of dimensionless and identical instants.

For humans to experience the world, time cannot be made of abstract instants stuck together. Humans live in a temporal flow that Bergson called “duration”, and only duration makes it possible to conceive and measure a “clock-time” consisting of instants. Duration itself cannot be measured; any measurement presupposes duration.

These two accounts of time provided the perfect opportunity to display the relation of physics and philosophy. On the one hand was Einstein’s special theory of relativity, which relates measured times of objects moving with respect to each other; on the other was Bergson’s account of the dependence of measured times on duration.

Unfortunately, as the authors of The Blind Spot describe, the opportunity was squandered by off-hand comments during an impromptu exchange between Einstein and Bergson. The much-written-about encounter, which took place in Paris in 1922, saw Einstein speaking to the Paris Philosophical Society, with Bergson in the audience.

Coaxed into speaking at a slow spot in the meeting, Bergson mentioned some ideas from his upcoming book Duration and Simultaneity. While relativity may be complete as a mathematical theory, he said, it depends on duration, or the experience of time itself, which escapes measurement and indeed makes “clock-time” possible.

Einstein was dismissive, calling Bergson’s notion “psychological”. To Einstein, duration is an emotion, the response of a human being to a situation rather than part and parcel of what it means to experience a situation.

Mutual understanding was still possible, had Einstein and Bergson pursued the issue with rigorous and open minds. But the occasion came to an unnecessary standstill when Bergson slipped up in remarks about the twin paradox.

Bergson argued that duration underlies the experience of each twin and neither would experience any dilation of it; neither would experience time as “slowing down” or “speeding up”. This much was true. But Bergson went on to say that duration is therefore a continuum, and that any intervals of time within it are abstractions that duration itself makes possible.

Bergson thought that duration is single. Moreover, the reference frames of the twins are symmetric, for the twins are in reference frames moving with respect to each other, not with respect to an absolute frame or universal time. An age difference between the twins, Bergson thought, is purely mathematical and only on their clocks; it might show up when the twins are theorizing, but not in real life.

This was a mistake; Einstein’s theory does indeed entail that the twins have aged differently. One twin has switched directions, jumping from a frame moving away to one in the reverse direction. Frame-switching requires acceleration, and the twin who has undergone it has broken the symmetry. Einstein and other physicists, noting Bergson’s misunderstanding of relativity, then felt legitimated to dismiss Bergson’s idea of duration and of how measurement depended on it.


Many philosophers, from Immanuel Kant to Alfred North Whitehead, have demonstrated that scientific activity arises from and depends on something like duration. What is innovative about The Blind Spot is that it uses such philosophical arguments to show how specific paradoxes and problems arise in science when the role of experience is overlooked.

“We must live the world before we conceptualize it,” the authors say. Their book title invokes an analogy with the optic nerve, which makes seeing possible only by creating a blind spot in the visual field. Similarly, the authors write, aspects of experience such as duration make things like measurement possible only by being invisible, even to scientific data-taking and theorizing. Duration cannot itself be measured and precedes being able to practise science – yet it is fundamental to science.

The critical point

The Blind Spot does not eliminate what’s enigmatic about the twin paradox but shows more clearly what that enigma is. An everyday assumption about time is that it’s Newtonian: time is universal and can be measured as flowing everywhere the same. Bergson found that this is wrong, for duration allows humans to interact with the world before they can measure time and develop theories about it. But it turns out that there is no one duration, and relativity theory captures the structure of the relations between durations.

The two siblings may be very different, but with help they can understand each other.

Physics-based model helps pedestrians and cyclists avoid city pollution
https://physicsworld.com/a/physics-based-model-helps-pedestrians-and-cyclists-avoid-city-pollution/
Mon, 21 Oct 2024 08:19:46 +0000
New immersive reality method could also inform policymakers and urban planners about risks, say researchers

The post Physics-based model helps pedestrians and cyclists avoid city pollution appeared first on Physics World.

Computer rendering of a neon-blue car with airflow lines passing over it and a cloud of emissions trailing behind it, labelled "brake dust ejection" near the front wheels and "tyre and road dispersion" in the middle

Scientists at the University of Birmingham, UK, have used physics-based modelling to develop a tool that lets cyclists and pedestrians visualize certain types of pollution in real time – and take steps to avoid it. The scientists say the data behind the tool could also guide policymakers and urban planners, helping them make cities cleaner and healthier.

As well as the exhaust from their tailpipes, motor vehicles produce particulates from their tyres, their brakes and their interactions with the road surface. These particulate pollutants are known health hazards, causing or contributing to chronic conditions such as lung disease and cardiovascular problems. However, it is difficult to track exactly how they pass from their sources into the environment, and the relationships between pollution levels and factors like vehicle type, speed and deceleration are hard to quantify.

Large-eddy simulations

In the new study, which is detailed in the journal Royal Society Open Science, researchers led by Birmingham mechanical engineer Jason Stafford developed a tool that answers some of these questions in a way that helps both members of the public and policymakers to manage the associated risks. Among other findings, they showed that the risk of being exposed to non-exhaust pollutants from vehicles is greatest when the vehicles brake – for example at traffic lights, zebra crossings and bus stops.

“We used large-eddy simulations to predict turbulent air flow around road vehicles for cruising and braking conditions that are observed in urban environments,” Stafford explains. “We then coupled these to a set of pollution transport (fluid dynamics) equations, allowing us to predict how harmful particle pollutants from the different emission sources (for example, brakes, tyres and roads) are transported to the wider pedestrian/cyclist environment.”
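The “pollution transport (fluid dynamics) equations” coupled to the simulated flow are typically of advection–diffusion form – a generic sketch rather than the study’s exact formulation:

```latex
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = \nabla\cdot\left(D\,\nabla c\right) + S
```

Here $c$ is the particulate concentration, $\mathbf{u}$ the turbulent velocity field resolved by the large-eddy simulation, $D$ an effective diffusivity and $S$ a source term representing brake, tyre and road emissions.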

A visible problem

The researchers’ next goal was to help people “see” these so-called PM2.5 pollutants (which, at 2.5 microns or less in diameter, cannot be detected with the naked eye) in their everyday world without alarming them unduly and putting them off walking and cycling in urban spaces altogether. To this end, they developed an immersive reality tool that makes the pollutants visible in space and time, allowing users to observe the safest distances for themselves. They then demonstrated this tool to members of the general public in the centre of Birmingham, which is the UK’s second most populous city and its second largest contributor to PM2.5 emissions from brake and tyre wear.

The people who tried the tool were able to visualize the pollution data and identify pollutant sources. They could also understand how to navigate urban spaces to reduce their exposure to these pollutants, Stafford says.

“It was very exciting to find that this approach was effective regardless of a person’s pre-existing knowledge of non-exhaust emissions or their educational background,” he tells Physics World.

Clear guidance and a framework via which to convey complex physicochemical data

Stafford says the team’s work provides clear guidance to governments, city councils and urban planners on the interface between road transport emissions and public health. It also creates a framework for conveying complex physicochemical data in a way that members of the public and decision-makers can understand, even if they lack scientific training.

“This is a crucial component if we are to help society,” Stafford says. Longitudinal studies, he adds, would help him and his colleagues understand whether the method actually leads to behavioural change for vehicle drivers or pedestrians.

Looking forward, the Birmingham team aims to reduce the computing complexity required to build the model. At present, the numerical simulations are intensive and require high-performance facilities to solve the governing equations and produce data. “These constraints limited us to constructing a one-way virtual environment,” Stafford says.  “Techniques that would provide close to real-time computing may open up two-way interactions that allow users to quickly change their environment and observe how this affects their exposure to pollution.”

Stafford says the team’s physics-informed immersive approach could also be extended beyond non-exhaust emissions to, for example, visualize indoor air quality and how it interacts with the built environment, where computational modelling tools are regularly used to inform thermal comfort and ventilation.

Liquid-crystal bifocal lens excels at polarization and edge imaging
https://physicsworld.com/a/liquid-crystal-bifocal-lens-excels-at-polarization-and-edge-imaging/
Sat, 19 Oct 2024 14:04:19 +0000
Applied voltage adjusts intensity at twin focal points

The post Liquid-crystal bifocal lens excels at polarization and edge imaging appeared first on Physics World.

A bifocal lens that can adjust the relative intensity of its two focal points using an applied electric field has been developed by Fan Fan and colleagues at China’s Hunan University. The lens features a bilayer structure made of liquid crystal materials. Each layer responds differently to the applied electric field, splitting incoming light into oppositely polarized beams.

Bifocal lenses combine two distinct lens segments into a single element, each segment with its own focal length – the distance from the lens to its focal point. The result is a lens with two distinct focal points.

While bifocals are best known for their use in vision correction, recent advances in optical materials are expanding their application in new directions. In their research, Fan’s team recognized how recent progress in holography held the potential for further innovations in the field.

Inspired by holography

“Researchers have devised many methods to improve the information capacity of holographic devices based on multi-layer structures,” says Fan. “We thought this type of structure could be useful beyond the field of holographic displays.”

To this end, the Hunan team investigated how layers within these structures could manipulate the polarization states of light beams in different ways. To achieve this, they fabricated their bifocal lens from liquid crystal materials.

Liquid crystals comprise molecules that can flow, as in a liquid, but can also maintain specific orientations, like molecules in a crystal. These properties make liquid crystals ideal for modulating light.

Bilayer benefits

“Most liquid-crystal-based devices are made from single-layer structures, but this limits light-field modulation to a confined area,” Fan explains. “To realize more complex and functional modulation of incident light, we used bilayer structures composed of a liquid crystal cell and a liquid crystal polymer.”

In the cell, the liquid crystal layer is sandwiched between two transparent substrates, creating a 2D material. When a voltage is applied across the cell, the molecules align along the electric field. In contrast, the molecules in the liquid-crystal polymer are much larger, and their alignment is not affected by the applied voltage.

Fan’s team took advantage of these differences, finding that each layer modulates circularly polarized light in different ways. As a result, the lens could split the light into left-handed and right-handed circularly polarized components. Crucially, each of these components is focused at a different point. By adjusting the voltage across the lens, the researchers could easily control the difference in intensity at the two focal points.

In the past, achieving this kind of control would only have been possible by mechanically rotating the lens layers with respect to each other. The new design is much simpler and makes it easier and more efficient to adjust the intensities at the two focal points.

Large separation distance

To demonstrate this advantage, Fan’s team used their bifocal lens in two types of imaging experiments. One was polarization imaging, which analyses differences in how left-handed and right-handed circularly polarized light interact with a sample. This method typically requires a large separation distance between focal points.

They also tested the lens in edge imaging, which enhances the clarity of boundaries in images. This requires a much smaller separation distance between focal points.

By adjusting the geometric configurations within the bilayer structure, Fan’s team achieved tight control over the separation between the focal points. In both polarization and edge imaging experiments, their bifocal lens performed well, closely matching the theoretical performance predicted by their simulations. These promising results suggest that the lens could have a wide range of applications in optical systems.

Based on their initial success, Fan and colleagues are now working to reduce the manufacturing costs of their multi-layer bifocal lenses. If successful, this would allow the lens to be used in a wide range of research applications.

“We believe that the light control mechanism we created using the multilayer structure could also be used to design other optical devices, including holographic devices and beam generators, or for optical image processing,” Fan says.

The lens is described in Optics Letters.

Century-old photoelectric effect inspires a new search for quantum gravity
https://physicsworld.com/a/century-old-photoelectric-effect-inspires-a-new-search-for-quantum-gravity/
Fri, 18 Oct 2024 14:46:05 +0000
Proposed experiment could demonstrate absorption and emission of individual gravitons

The post Century-old photoelectric effect inspires a new search for quantum gravity appeared first on Physics World.

According to quantum mechanics, our universe is like a Lego set. All matter particles, as well as particles such as light that act as messengers between them, come in discrete blocks of energy. By rearranging these blocks, it is possible to build everything we observe around us.

Well, almost everything. Gravity, a crucial piece of the universe, is missing from the quantum Lego set. But while there is still no quantum theory of gravity, the challenge of detecting its signatures now looks a little more manageable thanks to a proposed experiment that takes inspiration from the photoelectric effect, which Albert Einstein used to prove the quantum nature of light more than a century ago.

History revisited

Quantum mechanics and general relativity each, independently, provide accurate descriptions of our universe – but only at short and long distances, respectively. Bridging the two is one of the deepest problems facing physics, with tentative theories approaching it from different perspectives.

However, all efforts to describe a quantum theory of gravity agree on one thing: if gravity is quantum, then it, too, must have a particle that carries its force in discrete packages, just as other forces do.

In the latest study, which is described in Nature Communications, Germain Tobar and Sreenath K Manikandan of Sweden’s Stockholm University, working with Thomas Beitel and Igor Pikovski of the Stevens Institute of Technology, US, propose a new experiment that could show that gravity does indeed come in these discrete packages, which are known as gravitons.

The principle behind their experiment parallels that of the photoelectric effect, in which light shining on a material ejects electrons by delivering energy in discrete packets, one particle at a time, rather than as a continuous stream. Similarly, the Stockholm–Stevens team proposes using massive resonant bars that have been cooled and tuned to vibrate if they absorb a graviton from an incoming gravitational wave. When this happens, the bar’s quantum state would undergo a transition that can be detected by a quantum sensor.

“We’re playing the same game as photoelectric effect, except instead of photons – quanta of light – energy is exchanged between a graviton and the resonant bar in discrete steps,” Pikovski explains.
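The size of those discrete energy steps is easy to estimate: like a photon, a graviton of frequency f would carry energy E = hf. The 250 Hz frequency below is an assumed, illustrative LIGO-band value, not a figure from the paper:

```python
# Back-of-envelope sketch (assumed frequency, not from the paper): the energy
# quantum a resonant bar would absorb from a single graviton, E = h * f.
h = 6.626e-34  # Planck constant, J s

def graviton_energy(f_hz):
    """Energy of one quantum (photon or graviton) at frequency f_hz."""
    return h * f_hz

E = graviton_energy(250.0)  # an illustrative LIGO-band frequency
print(f"E ~ {E:.1e} J")     # ~1.7e-31 J: the tiny energy step the bar's
                            # quantum sensor must resolve
```

This is why the bars must be cooled to their ground state: thermal vibrations would otherwise swamp an energy change this small.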

“Still hard, but not as hard as we thought”

While the idea of using resonant bars to detect gravitational waves dates back to the 1960s, the possibility of using it to detect quantum transitions is new. “We realized if you change perspectives and instead of measuring change in position, you measure change in energy in the quantum state, you can learn more,” Pikovski says.

A key driver of this perspective shift is the Laser Interferometer Gravitational-wave Observatory, or LIGO, which detects gravitational waves by measuring tiny deviations in the length of the interferometer’s arms as the waves pass through them. Thanks to LIGO, Pikovski says, “We not only know when gravitational waves are detected but also [their] properties such as frequency.”

Aerial photo of the Hanford detector site of LIGO, showing a building in the centre of the image and two long interferometer arms stretching into the distance of a desert-like landscape

In their study, Pikovski and colleagues used LIGO’s repository of gravitational-wave data to narrow down the frequency and energy range of typical gravitational waves. This allowed them to calculate the type of resonant bar required to detect gravitons. LIGO could also help them cross-correlate any signals they detect.

“When these three ingredients – resonant bar as a macroscopic quantum detector, detecting quantum transitions using quantum sensors and cross-correlating detection with LIGO – are taken altogether, it turns out detecting a graviton is still hard but not as hard as we thought,” Pikovski says.

Within reach, theoretically

For most known gravitational-wave events, the Stockholm–Stevens scientists say that the number of gravitons their proposed device could detect is small. However, for collisions between two neutron stars, they say, a quantum transition in a reasonably sized resonant bar could be detected for roughly one in every three events.

Carlo Rovelli, a theorist at the University of Aix-Marseille, France, who was not involved in the study, agrees that “the goal of quantum gravity observations seem within reach”. He adds that the work “shows that the arguments claiming that it should be impossible to find evidence for single-graviton exchange were wrong”.

Frank Wilczek, a theorist at the Massachusetts Institute of Technology (MIT), US, who was also not involved in the study, is similarly positive. For a consistent theory that respects quantum mechanics and general relativity, he says, “it can be interpreted that this experiment would prove the existence of gravitons and that the gravitational field is quantized”.

So when are we going to start detecting?

On paper, the experiment shows promise. But actually building a massive graviton detector with measurable quantum transitions will be anything but easy.

Part of the reason is that a typical gravitational-wave “shower” consists of an astronomically large number of gravitons. Just as the pattern of individual raindrops can be heard as they fall on a tin roof, carefully prepared resonant bars should, in principle, be able to detect individual incoming gravitons within these gravitational-wave showers.

But for this to happen, the bars must be protected from noise and cooled down to their least energetic state. Otherwise, such tiny energy changes may be impossible to observe.

Vivishek Sudhir, an expert in quantum measurements at MIT who was not part of the research team, describes it as “an enormous practical challenge still, one that we do not currently have the technology for”.

Similarly, quantum sensing has been achieved in resonators, but only at much smaller masses than the tens of kilograms or more required to detect gravitons. The team is, however, working on a potential solution: Tobar, a PhD student at Stockholm and the study’s lead author, is devising a version of the experiment that would send the signal from the bars to smaller masses using transducers – in effect, meeting the quantum sensing challenge in the middle. “It’s not something you can do today, but I would guess we can achieve it within a decade or two,” Pikovski says.

Sudhir agrees that quantum measurements and experiments are rapidly progressing. “Keep in mind that only 15 years ago, nobody imagined that tangibly macroscopic systems would even be prepared in quantum states,” he says. “Now, we can do that.”

Passing the torch: The “QuanTour” light source marks the International Year of Quantum
https://physicsworld.com/a/passing-the-torch-the-quantour-light-source-marks-the-international-year-of-quantum/
Thu, 17 Oct 2024 15:25:15 +0000
Katherine Skipper visits the Cavendish laboratory in Cambridge to catch a quantum light source that’s touring Europe for the International Year of Quantum

The post Passing the torch: The “QuanTour” light source marks the International Year of Quantum appeared first on Physics World.

Earlier this year, the start of the Paris Olympics was marked by the ceremonial relay of the Olympic torch. You’ll have to wait until 2028 for the next Olympics, but in the meantime there’s the International Year of Quantum (IYQ) in 2025, which also features a torch relay. In keeping with the quantum theme, however, this light source is very, very small.

The light source is currently touring 12 quantum labs around Europe as part of IYQ, and last week I visited the Cavendish Laboratory at the University of Cambridge, UK, where it was on stop eight of what’s dubbed QuanTour. It’s a project of the German Physical Society (DPG), organised by Doris Reiter from the Technical University of Dortmund and Tobias Heindel from the Technical University of Berlin.

According to Mete Atatüre, who leads the Quantum Optical Materials and Systems (QOMS) group at Cambridge and in whose lab QuanTour is based, one of the project’s aims is to demystify quantum science. “I think what we need to do, especially in the year of quantum, is to have a change of style,” he says. “So that we focus not on the weirdness of quantum but on what it can actually bring us.”

Indeed, though it requires complex optical apparatus and must be cooled with helium, the QuanTour light source itself looks like an ordinary computer chip. It is in fact an array of quantum dots, each emitting single photons when illuminated by a laser. “It’s really meant to show off that you can use quantum dots as a plug-in light source,” explains Christian Schimpf, a postdoc in the Quantum Engineering Group in Cambridge, who showed me around the lab where QuanTour is spending its time in England.

The light source is right at home in the Cambridge lab, where quantum dots are a key area of research. The team is working on networking applications, where the goal is to transmit quantum information over long distances, preferably using existing fibre-optic networks. In fibre optics, the signal is amplified regularly along the route, but quantum networks can’t do this – the so-called “no-cloning” theorem means it’s impossible to create a copy of an unknown quantum state.
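The no-cloning theorem follows from the linearity of quantum mechanics alone – the standard textbook argument, sketched here for completeness (not part of the article):

```latex
% Suppose a unitary U could copy two arbitrary states:
U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,
\qquad
U|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle.
% Linearity then fixes its action on a superposition:
U\left[\tfrac{1}{\sqrt{2}}\bigl(|\psi\rangle+|\phi\rangle\bigr)\right]|0\rangle
  = \tfrac{1}{\sqrt{2}}\bigl(|\psi\rangle|\psi\rangle+|\phi\rangle|\phi\rangle\bigr),
% ...whereas a genuine copier would have to output instead:
\tfrac{1}{2}\bigl(|\psi\rangle+|\phi\rangle\bigr)\bigl(|\psi\rangle+|\phi\rangle\bigr).
```

The two outputs disagree unless the states are identical or orthogonal, so no device can duplicate an arbitrary unknown state – which is why long quantum links must be stitched together from short entangled segments rather than amplified.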

The solution is to create a long-distance communication link from many short-distance entanglements. The challenge for scientists in the Cambridge lab, Schimpf explains, is to build ensembles of entangled qubits that can “store quantum bits on reasonable time scales.” He’s talking about just a few milliseconds, but this is still a significant challenge, requiring cooling close to absolute zero and precise control over the fabrication process.

Elsewhere in the Cavendish Laboratory, scientists in the quantum group are investigating platforms for quantum sensing, where changes to single quantum states are used to measure tiny magnetic fields. Attractive materials for this include diamond and some 2D materials, where quantum spin states trapped at crystal defects can act as qubits. Earlier this year Physics World spoke to Hannah Stern, a former postdoc in Atatüre’s group, who won an award from the Institute of Physics for her research on quantum sensing with hexagonal boron nitride, which she began in Cambridge.

I also spoke to Dorian Gangloff, head of the quantum engineering group, who described his recent work on nonlinear quantum optics. Nonlinear optical effects are generally only observed with high-power light sources such as lasers, but Gangloff’s team is trying to engineer these effects in single photons. Nonlinear quantum optics could be used to shift the frequency of a single photon or even split it into an entangled pair.

When asked about the existing challenges of rolling out quantum technologies, Atatüre points out that when quantum mechanics was first conceived, the belief was: “Of course we’ll never be able to see this effect, but if we did, what would the experimental result look like?” Thanks to decades of work, however, it is indeed possible to see quantum science in action, as I did in Cambridge. Atatüre is confident that researchers will be able to take the next step – building useful technologies with quantum phenomena.

At the end of this week, QuanTour’s time in Cambridge will be up. If you missed it, you’ll have to head to University College Cork in Ireland, where it will be spending the next leg of its journey with the group of Emanuele Pelucchi.


The post Passing the torch: The “QuanTour” light source marks the International Year of Quantum appeared first on Physics World.

]]>
Blog Katherine Skipper visits the Cavendish laboratory in Cambridge to catch a quantum light source that’s touring Europe for the International Year of Quantum https://physicsworld.com/wp-content/uploads/2024/10/20241010_105442-scaled.jpg
Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia https://physicsworld.com/a/data-intensive-phds-at-liv-inno-prepare-students-for-careers-outside-of-academia/ Thu, 17 Oct 2024 10:38:15 +0000 https://physicsworld.com/?p=117426 This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science

The post Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia appeared first on Physics World.

]]>
LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science, offers students fully funded PhD studentships across a broad range of research projects, from medical physics to quantum computing. All students receive training in high-performance computing, data analysis, machine learning and artificial intelligence. Students also receive career advice and training in project management, entrepreneurship and communication skills – preparing them for careers outside of academia.

This podcast features the accelerator physicist Carsten Welsch, who is head of the Accelerator Science Cluster at the University of Liverpool and director of LIV.INNO, and the computational astrophysicist Andreea Font, who is a deputy director of LIV.INNO.

They chat with Physics World’s Katherine Skipper about how LIV.INNO provides its students with a wide range of skills and experiences – including a six-month industrial placement.

This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science.


The post Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia appeared first on Physics World.

]]>
Podcasts This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science https://physicsworld.com/wp-content/uploads/2024/10/Andreea-and-Carsten-new.jpg newsletter
Operando NMR methods for redox flow batteries and ammonia synthesis https://physicsworld.com/a/operando-nmr-methods-for-redox-flow-batteries-and-ammonia-synthesis/ Thu, 17 Oct 2024 09:03:41 +0000 https://physicsworld.com/?p=114973 Available to watch now, The Electrochemical Society in partnership with BioLogic, explore the application of magnetic resonance methods for studying redox flow batteries and ammonia synthesis

The post Operando NMR methods for redox flow batteries and ammonia synthesis appeared first on Physics World.

]]>

Magnetic resonance methods, including nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR), are non-invasive, atom-specific, quantitative, and capable of probing liquid and solid-state samples. These features make them ideal tools for operando measurements of electrochemical devices, and for establishing structure-function relationships under realistic conditions.

The first part of the talk presents how coupled inline NMR and EPR methods were developed and applied to unravel rich electrochemistry in organic molecule-based redox flow batteries (RFBs). Case studies performed on low-cost and compact bench-top systems are reviewed, demonstrating that a bench-top NMR has sufficient spectral and temporal resolution for studying degradation reaction mechanisms, monitoring the state of charge, and tracking crossover phenomena in a working RFB. The second part of the talk presents new in situ NMR methods for studying Li-mediated ammonia synthesis, and the direct observation of lithium plating and its concurrent corrosion, nitrogen splitting on lithium metal, and protonolysis of lithium nitride. Based on these insights, potential strategies to optimize the efficiencies and rates of Li-mediated ammonia synthesis are discussed. The goal is to demonstrate that operando NMR and EPR methods are powerful and general and can be applied for understanding the electrochemistry underpinning various applications.

An interactive Q&A session follows the presentation.

Evan Wenbo Zhao is a tenured assistant professor at the Magnetic Resonance Research Center at Radboud Universiteit Nijmegen in the Netherlands. His core research focuses on developing operando/in situ NMR methods for studying electrochemical storage and conversion chemistries, including redox flow batteries, electrochemical ammonia synthesis, carbon-dioxide reduction, and lignin oxidation. He has led projects funded by the Dutch Research Council Open Competition Program, Bruker Collaboration, Radboud-Glasgow Collaboration Grants, the Mitacs Globalink Research Award, and others. After receiving his BS from Nanyang Technological University, he completed a PhD in chemistry with Prof. Clifford Russell Bowers at the University of Florida. Evan’s postdoc was with Prof. Dame Clare Grey at the Yusuf Hamied Department of Chemistry at the University of Cambridge.


The post Operando NMR methods for redox flow batteries and ammonia synthesis appeared first on Physics World.

]]>
Webinar Available to watch now, The Electrochemical Society in partnership with BioLogic, explore the application of magnetic resonance methods for studying redox flow batteries and ammonia synthesis https://physicsworld.com/wp-content/uploads/2024/06/2024-11-13-ECS-image.jpg
US Department of Energy announces new Fermilab contractor https://physicsworld.com/a/us-department-of-energy-announces-new-fermilab-contractor/ Thu, 17 Oct 2024 08:00:30 +0000 https://physicsworld.com/?p=117431 Yet some see little change in the selection of Fermi Forward Discovery Group

The post US Department of Energy announces new Fermilab contractor appeared first on Physics World.

]]>
A consortium of universities and companies has been awarded the contract to manage and operate Fermilab, the US’s premier particle-physics facility. The US Department of Energy (DOE) announced on 1 October that the new contractor, Fermi Forward Discovery Group, LLC (FFDV), will take over operation of the lab from 1 January 2025.

FFDV consists of Fermilab’s current contractor – the University of Chicago and Universities Research Association (URA), a consortium of research universities – as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates. The consortium’s initial contract will last for five years, but “exemplary performance” running the lab could extend it by a further decade.

“We are honoured that the Department of Energy has selected FermiForward to manage Fermilab after a rigorous contract process,” University of Chicago president Paul Alivisatos told Physics World. “FermiForward represents a new approach that brings together the best parts of Fermilab with two new industry partners, who bring broad expertise from a deep bench from across the DOE complex.”

Alivisatos notes that the inclusion of Amentum and Longenecker will strengthen the management capability of the consortium given the companies’ “exemplary record of accomplishment in project management, operations, and safety.” Longenecker, a female-led company based in Las Vegas, is part of the managerial teams currently running Sandia, Los Alamos, and Savannah River national laboratories. Virginia-based Amentum, meanwhile, has a connection to Fermilab through Greg Stephens, its former vice president, who is now Fermilab’s chief operating officer.

The choice of the new contractor comes after Fermilab has faced a series of operating and budget challenges. In 2021, the institution scored low marks on a DOE assessment of its operations. A year later, complaints emerged that the lab’s leadership was restricting access to its campus despite reduced concern about the spread of COVID-19. In July, a group of Fermilab staff whistleblowers claimed that a series of problems indicated that the lab was “doomed” without a change of management. And in late August, the lab underwent a period of limited operations to reduce a budgetary shortfall.

The Fermilab staff whistleblowers, however, see little change in the DOE’s selection of FFDV. Indeed, the key members of FFDV – the University of Chicago and URA – made up Fermi Research Alliance, the previous contractor that has overseen Fermilab’s operations since 2007.

“We understand that the only reaction by DOE to our investigative report is that of coaching the University of Chicago’s teams that steward the university’s relationships with the national labs,” the group wrote in a letter to Geraldine Richmond, DOE’s Undersecretary for Science and Innovation, which has been seen by Physics World. “By doing so, the DOE is once again showing that it is for the status quo.”

The DOE hasn’t revealed how many bids it received or other details about the contract award. In a statement to Physics World it noted that it “cannot discuss the contract at the current time because of business sensitive information”. Fermilab declined to comment for the story.

The post US Department of Energy announces new Fermilab contractor appeared first on Physics World.

]]>
News Yet some see little change in the selection of Fermi Forward Discovery Group https://physicsworld.com/wp-content/uploads/2024/10/24-0129-05.jpg newsletter1
Mountaintop observations of gamma-ray glow could shed light on origins of lightning https://physicsworld.com/a/mountaintop-observations-of-gamma-ray-glow-could-shed-light-on-origins-of-lightning/ Wed, 16 Oct 2024 17:10:28 +0000 https://physicsworld.com/?p=117476 Electric fields near Earth’s surface are stronger than expected

The post Mountaintop observations of gamma-ray glow could shed light on origins of lightning appeared first on Physics World.

]]>
Research done at a mountaintop cosmic-ray observatory in Armenia has shed new light on how thunderstorms can create flashes of gamma rays by accelerating electrons. Further study of the phenomenon could answer important questions about the origins of lightning.

This accelerating process is called thunderstorm ground enhancement (TGE), whereby thunderstorms create strong electric fields that accelerate atmospheric free electrons to high energies. These electrons then collide with air molecules, creating a cascade of secondary charged particles. When charged particles are deflected in these collisions they emit gamma rays in a process called bremsstrahlung.

The flashes of gamma rays are called “gamma-ray glows” and are some of the strongest natural sources of high-energy radiation on Earth.

Physicist Joseph Dwyer at the University of New Hampshire, who was not involved in the Armenian study, says: “When you think of gamma rays, you usually think of black holes or solar flares. You don’t think of inside the Earth’s troposphere as being a source of gamma rays, and we’re still trying to understand this.”

Century-old mystery

Indeed, the effect was first predicted a century ago by Nobel laureate Charles Wilson, who is best known for his invention of the cloud chamber radiation detector. However, despite numerous attempts over the decades, early researchers were unable to detect this acceleration.

This latest research was led by Ashot Chilingarian, who is director of the Cosmic Ray Division of Armenia’s Yerevan Physics Institute. The measurements were made at a research station located 3200 m above sea level on Armenia’s Mount Aragats.

Chilingarian says, “There were some people that were convinced that there was no such effect. But now, on Aragats, we can measure electrons and gamma rays directly from thunderclouds.”

In the summer of 2023, Chilingarian and colleagues detected gamma rays, electrons, neutrons and other particles from intense TGE events. By analysing 56 of those events, the team has now concluded that strong electric fields were present close to Earth’s surface.

Though Aragats is not the first facility to confirm the existence of these gamma-ray glows, it is uniquely well-situated, sitting at a high altitude in an active storm region. This allows measurements to be made very close to thunderclouds.

Energy spectra

Instead of measuring the electric field directly, the team inferred its strength by analysing the energy spectra of electrons and gamma rays detected during TGE events.

By comparing the detected radiation to well-understood simulations of electron acceleration, the team deduced the strength of the electric field responsible for the particle showers as 2.1 kV/cm.

This field strength is substantially higher than what has been observed in most previous studies of thunderstorms, which typically use weather balloons to take direct field measurements.

The fact that such a high field can exist near the ground during a thunderstorm challenges previous assumptions about the limits of electric fields in the atmosphere.

Moreover, this discovery could help solve one of the biggest mysteries in atmospheric science: how lightning is initiated. Despite decades of research, scientists have been unable to measure electric fields strong enough to break down the air and create the initial spark of lightning.

“These are nice measurements and they’re one piece of the puzzle,” says Dwyer. “What these are telling us is that these gamma-ray glows are so powerful and they’re producing so much ionizing radiation that they’re partially discharging the thunderstorm.”

“As the thunderstorms try to charge up, these gamma rays turn on and cause the field to kind of collapse,” Dwyer explains, comparing it to stepping on a bump in a carpet. “You collapse it in one place but it pops up in another, so this enhancement may be enough to help the lightning get started.”

The research is described in Physical Review D.

The post Mountaintop observations of gamma-ray glow could shed light on origins of lightning appeared first on Physics World.

]]>
Research update Electric fields near Earth’s surface are stronger than expected https://physicsworld.com/wp-content/uploads/2024/10/16-10-2024-Aragats_Cosmic_Ray_Research_Station.jpg newsletter
Spiders use physics, not chemistry, to cut silk in their webs https://physicsworld.com/a/spiders-use-physics-not-chemistry-to-cut-silk-in-their-webs/ Wed, 16 Oct 2024 14:02:56 +0000 https://physicsworld.com/?p=117444 New work resolves a longstanding debate and could aid the development of new cutting tools

The post Spiders use physics, not chemistry, to cut silk in their webs appeared first on Physics World.

]]>
Spider silk is among the toughest of all biological materials, and scientists have long been puzzled by how spiders manage to cut it. Do they break it down by chemical means, using enzymes? Or do they do it mechanically, using their fangs? Researchers at the University of Trento in Italy have now come down firmly on the side of fangs, resolving a longstanding debate and perhaps also advancing the development of spider-fang-inspired cutting tools.

For spiders – especially those that spin webs – the ability to cut silk lines quickly and efficiently is a crucial skill. Previously, the main theory of how they do it involved enzymes that they produce in their mouths, and that can break silk down. This mechanism, however, cannot explain how spiders cut silk so quickly. Mechanical cutting is faster, but spiders’ fangs are not shaped like scissors or other common cutting tools, so this was considered less likely.

In the new work, researchers led by Nicola Pugno and Gabriele Greco studied two species of spiders (Nuctenea umbratica and Steatoda triangulosa) collected from around the campus in Trento. In one set of experiments, they allowed the spiders to interact with artificial webs made from Kevlar, a synthetic aramid fibre. To weave their own webs, the spiders needed to remove the Kevlar threads and replace them with silk ones. They did this by first cutting the key structural threads in the artificial webs, then spinning a silken framework in between to build up the web structure. Any discarded fibres became support for the web.

Pugno, Greco and colleagues also allowed the spiders to build webs naturally (that is, without any artificial materials present). They then removed some of the silken threads and substituted them with carbon fibre ones so they could study how the spiders cut them.

Revealing images

One of the researchers’ first observations was that the spiders found it harder to cut fibres made from Kevlar than those made from silk. While cutting silk took them just a fraction of a second, they needed more than 10 s to cut Kevlar. This implies that much more effort was required.

A further clue came from scanning electron microscope (SEM) images of the spider-cut silk and carbon fibres. These images showed that the fracture surfaces of both were similar to those of samples that were broken with scissors or during tensile tests.

Meanwhile, images of the spider fangs themselves revealed micro-structured serrations similar to those found in animals such as crocodiles and sharks. The advantage of serrated edges is that they minimize the force required to cut a material at the point of contact – something humans have long exploited by making serrated blades that quickly cut through tough materials like wood and steel (not to mention foods like bread and steak).

In spider fangs, however, the serrations are not evenly spaced. Instead, Pugno and Greco found that the gap between them is narrowest at the tip of a fang and widest at the base. This, they say, suggests that when spiders want to cut a fibre, their fangs slide inwards across it until it becomes trapped in a serration of the same size. At the contact point between fibre and serration, the required cutting force is at a minimum, thereby maximizing the efficiency of cutting.

“We conducted specific experiments to prove that the fang of a spider is a ‘smart’ tool with graded serrations for cutting fibres of different dimensions naturally placed in the best place for maximizing cutting efficiency,” Pugno explains. “This makes it more efficient than a razor blade to cut these fibres,” Greco adds.

The researchers, who report their work in Advanced Science, also conducted analytical and finite-element numerical analyses to back up their observations. These revealed that when a fibre presses onto a fang, the stress on the fibre becomes concentrated thanks to the two bulges at the top of the serration. This concentration initiates the propagation of cracks through the fibre, leading to its failure, they say.

The researchers note that serration had previously been observed in 48 families of modern spiders (araneomorphs) as well as at least three families of older species (mygalomorphs). They speculate that it may have been important for functions other than cutting silk, such as chewing and mashing prey, with the araneomorphs possibly later evolving it to cut silk. But their findings are also relevant in fields other than evolutionary biology, they say.

“By explaining how spiders cut, we reveal a basic engineering principle that could inspire the design of highly efficient, sharper and more performing cutting tools that could be of interest for high-tech applications,” Pugno tells Physics World. “For example, for cutting wood, metal, stone, food or hair.”

The post Spiders use physics, not chemistry, to cut silk in their webs appeared first on Physics World.

]]>
Research update New work resolves a longstanding debate and could aid the development of new cutting tools https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_1.jpg newsletter
Around the world in 16 orbits: a day in the life of the International Space Station https://physicsworld.com/a/around-the-world-in-16-orbits-a-day-in-the-life-of-the-international-space-station/ Wed, 16 Oct 2024 10:00:27 +0000 https://physicsworld.com/?p=117110 Kate Gardner reviews the novel Orbital by Samantha Harvey

The post Around the world in 16 orbits: a day in the life of the International Space Station appeared first on Physics World.

]]>
Every day the International Space Station (ISS) orbits the Earth 16 times. Every day its occupants could (if they aren’t otherwise occupied) observe each one of our planet’s terrains and seasons. For almost a quarter of a century the ISS has been continuously inhabited by humans, a few at a time, hailing from – at the latest count – 21 countries. This impressive feat of science, engineering and international co-operation may no longer be noteworthy or news fodder, yet it still has the power to astonish and inspire.

This makes it an excellent setting for a novel that’s quietly philosophical, tackling some of the biggest questions humanity has ever asked. Orbital by British author Samantha Harvey follows four astronauts and two cosmonauts through one day on the ISS. It is an ordinary, unremarkable day and yet their location makes every moment remarkable.

We meet our characters – four men and two women, from five countries – as they are waking up during orbit 1 and leave them fast asleep in orbit 16. Harvey has clearly read astronaut accounts and studied information available from NASA and the European Space Agency. She includes as much detail about life on the ISS as a typical popular-science book on the subject.

These minutiae of astronaut tasks are interspersed with descriptions of Earth during each of the 16 orbits, as well as long passages deliberating everything from whether there is a God and climate catastrophe to global politics and the futility of trying to understand another human being.

The characters going about their tightly scheduled day in Orbital are individual people, each with their own preoccupations, past and present. While they exercise and perform maintenance tasks, science experiments and self-assessments, their thoughts roam to give us an insight that feels as true as any astronaut memoir. One character muses on the difficulty of sending messages to her loved ones, feeling that everything she has to say is either hopelessly mundane or so grandiose as to be ridiculous. I don’t know if an astronaut on the ISS has ever thought that, but for me, it perfectly encapsulates their situation.

The ISS’s orbit 400 km above Earth is close enough to see the topography and colours that pass beneath, but far enough that signs of humanity can only be inferred – at least in daylight. This doesn’t stop the characters from learning to see the traces of humans: algal blooms in oceans warmer than they once were; retreated glaciers; mountains bare of snow that were once renowned for their white caps; absent rainforest; reclaimed land covered by acres of greenhouses.

It’s a curious choice to set a book on the ISS that isn’t science fiction. It’s fiction, yes, and certainly based in the world of science, but the science it depicts isn’t futuristic or even particularly cutting-edge. The ISS is now quite old technology, nearing the end of its remarkable life, as Harvey points out in an insightful essay for LitHub. Its occupants still do experiments to further our scientific knowledge, but even there what Harvey describes is sci-fact, not sci-fi.

In her LitHub essay, Harvey says it was precisely this “slow death” of the ISS that appealed to her. The ISS is almost a time capsule, hearkening back to the end of the Cold War. It now looks likely that Russia will pull out – or be ejected – from the mission before its projected end date of 2030.

Viewed from the ISS, no borders are visible, and the crew joke comfortably about their national differences. However, their lives are nevertheless dictated by strict and sometimes petty rules governing, for example, which toilet and which exercise equipment to use. These regulations are just one more banal reality of life on the ISS, like muscle atrophy, blocked sinuses or packing up waste to go in the next resupply craft.

Just consider the real-life NASA astronauts Suni Williams and Butch Wilmore, whose stay on the ISS has been extended following problems with the Boeing craft that was supposed to bring them home in August. Having two extra people living on the space station for several months longer than planned is an intensely practical matter, made easier by such innovations as the recycling of their urine and sweat into drinking water, or the rule that astronauts must swallow toothpaste rather than spit it out.

Harvey manages to convey that these details are quotidian. But she also imbues them with beauty. During one conversation in Orbital, a character sheds four tears. He and a crew mate then chase down each floating water droplet because loose liquids must be avoided. It’s a small moment that says so much with few words.

Orbital has been shortlisted for both the 2024 Booker Prize and the 2024 Ursula K Le Guin Prize for Fiction. The recognition reflects the book’s combination of literary prose and unusual globe-spanning (indeed, beyond global) perspective. Harvey’s writing has been compared to Virginia Woolf – a comparison that is well warranted. And yet Orbital is as accessible and educational as the best of popular science. It’s a feat almost as astonishing as the existence of the ISS.

The post Around the world in 16 orbits: a day in the life of the International Space Station appeared first on Physics World.

]]>
Opinion and reviews Kate Gardner reviews the novel Orbital by Samantha Harvey https://physicsworld.com/wp-content/uploads/2024/10/2024-10-Gardner-Virts.jpg newsletter
Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize https://physicsworld.com/a/semiconductor-pioneer-richard-friend-bags-2024-isaac-newton-medal-and-prize/ Tue, 15 Oct 2024 14:44:55 +0000 https://physicsworld.com/?p=117420 Friend won for his work on the fundamental electronic properties of molecular semiconductors and in their engineering development

The post Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize appeared first on Physics World.

]]>
The semiconductor physicist Richard Friend from the University of Cambridge has won the 2024 Isaac Newton Medal and Prize “for pioneering and enduring work on the fundamental electronic properties of molecular semiconductors and in their engineering development”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics”.

Friend was born in 1953 in London, UK. He completed a PhD at the University of Cambridge in 1979 under the supervision of Abe Yoffe and remained at Cambridge, becoming a full professor in 1995. Friend’s research has led to a deeper understanding of the electronic properties of molecular semiconductors: in the 1980s he pioneered the fabrication of thin-film molecular semiconductor devices that were later developed to support field-effect transistor circuits.

When it was discovered that semiconducting polymers could be used for light-emitting diodes (LEDs), Friend founded Cambridge Display Technology in 1992 to develop polymer LED displays. In 2000 he also co-founded Plastic Logic to advance polymer transistor circuits for e-paper displays.

As well as the 2024 Newton Medal and Prize, Friend’s other honours include the IOP’s Katherine Burr Blodgett Medal and Prize in 2009 and in 2010 he shared the Millennium Technology Prize for the development of plastic electronics. He was also knighted for services to physics in the 2003 Queen’s Birthday Honours list.

“I am immensely proud of this award and the recognition of our work,” notes Friend. “Our Cambridge group helped set the framework for the field of molecular semiconductors, showing new ways to improve how these materials can separate charges and emit light.”

Friend notes that he is “not done just yet” and is currently working on molecular semiconductors to improve the efficiency of LEDs.

Innovating and inspiring

Friend’s honour formed part of the IOP’s wider 2024 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists.

Other winners include Laura Herz from the University of Oxford, who receives the Faraday Prize “for pioneering advances in the photophysics of next-generation semiconductors, accomplished through innovative spectroscopic experiments”. Rebecca Dewey from the University of Nottingham, meanwhile, receives the Phillips Award “for contributions to equality, diversity and inclusion in Institute of Physics activities, including promoting, updating and improving the accessibility of the I am a Physicist Girlguiding Badge, and engaging with British Sign Language users”.

In a statement, IOP president Keith Burnett congratulated all the winners, adding that they represent “some of the most innovative and inspiring” work that is happening in physics.

“Today’s world faces many challenges which physics will play an absolutely fundamental part in addressing, whether it’s securing the future of our economy or the transition to sustainable energy production and net zero,” adds Burnett. “Our award winners are in the vanguard of that work and each one has made a significant and positive impact in their profession. Whether as a researcher, teacher, industrialist, technician or apprentice, I hope they are incredibly proud of their achievements.”

The post Semiconductor pioneer Richard Friend bags 2024 Isaac Newton Medal and Prize appeared first on Physics World.

]]>
News Friend won for his work on the fundamental electronic properties of molecular semiconductors and in their engineering development https://physicsworld.com/wp-content/uploads/2024/10/richard_friend_iop_web.jpg newsletter1
‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth https://physicsworld.com/a/mock-asteroids-deflected-by-x-rays-in-study-that-could-help-us-protect-earth/ Tue, 15 Oct 2024 14:14:26 +0000 https://physicsworld.com/?p=117412 Lab-based experiment shows how centimetre-sized objects are accelerated

The post ‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth appeared first on Physics World.

]]>
For the first time, physicists in the US have done lab-based experiments that show how an asteroid could be deflected by powerful bursts of X-rays. With the help of the world’s largest high-frequency electromagnetic wave generator, Nathan Moore and colleagues at Sandia National Laboratories showed how an asteroid-mimicking target could be freely suspended in space while being accelerated by ultra-short X-ray bursts.

While most asteroid impacts occur far from populated areas, they still hold the potential to cause devastation. In 2013, for example, over 1600 people were injured when a meteor exploded above the Russian city of Chelyabinsk. To better defend ourselves against these threats, planetary scientists have investigated how the paths of asteroids could be deflected before they reach Earth.

In 2022, NASA successfully demonstrated a small deflection with the DART mission, which sent a spacecraft to collide with the rocky asteroid Dimorphos at a speed of 24,000 km/h. After the impact, the period of Dimorphos’ orbit around the larger asteroid, Didymos, shortened by some 33 min.

However, this approach would not be sufficient to deflect larger objects such as the famous Chicxulub asteroid. This was roughly 10 km in diameter and triggered a mass extinction event when it impacted Earth about 66 million years ago.

Powerful X-ray burst

Fortunately, as Moore explains, there is an alternative approach to a DART-like impact. “It’s been known for decades that the only way to prevent the largest asteroids from hitting the earth is to use a powerful X-ray burst from a nuclear device,” he says. “But there has never been a safe way to test that idea. Nor would testing in space be practical.”

So far, X-ray deflection techniques have only been explored in computer simulations. But now, Moore’s team has tested a much smaller scale version of a deflection in the lab.

To generate energetic bursts of X-rays, the team used a powerful facility at Sandia National Laboratories called the Z Pulsed Power Facility – or Z Machine. Currently the largest pulsed power facility in the world, the Z Machine is essentially a giant battery that releases vast amounts of stored electrical energy in powerful, ultra-short pulses, funnelled down to a centimetre-sized target.

Few millionths of a second

In this case, the researchers used the Z Machine to compress a cylinder of argon gas into a hot, dense plasma. Afterwards, the plasma radiated X-rays in nanosecond pulses, which were fired at mock asteroid targets made from discs of fused silica. Using an optical setup behind the target, the team could measure the deflection of the targets.

“These ‘practice missions’ are miniaturized – our mock asteroids are only roughly a centimetre in size – and the flight is short-lived – just a few millionths of a second,” Moore explains. “But that’s just enough to let us test the deflection models accurately.”

Because the experiment was done here on Earth, rather than in space, the team also had to ensure that the targets were in freefall when struck by the X-rays. This was done by detaching the mock asteroid from a holder about a nanosecond before it was struck.

X-ray scissors

They achieved this by suspending the sample from a support made from thin metal foil, itself attached to a cylindrical fixture. To detach the sample, they used a technique Moore calls “X-ray scissors”, which almost instantly cut the sample away from the cylindrical fixture.

When illuminated by the X-ray burst, the supporting foil rapidly heated up and vaporized, well before the motion of the deflecting target could be affected by the fixture. For a brief moment, this left the target in freefall.

In the team’s initial experiments, the X-ray scissors worked just as intended. The X-ray pulse simultaneously vaporized the target surface and deflected what remained at velocities close to 70 m/s.
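A rough illustration of why such a short “flight” is still measurable: the reported deflection velocity multiplied by a microsecond-scale flight time gives a displacement of hundreds of micrometres, comfortably within reach of an optical measurement. The 5 μs flight time below is an assumed stand-in for the article’s “few millionths of a second”.

```python
# Back-of-envelope estimate of the mock asteroid's displacement during freefall.
v = 70.0   # deflection velocity reported in the experiment, m/s
t = 5e-6   # assumed flight time, s ("a few millionths of a second")

displacement = v * t  # distance travelled while unconstrained, in metres
print(f"displacement ≈ {displacement * 1e6:.0f} micrometres")
```

At these values the target moves about 350 μm before the measurement window closes, which is why a nanosecond-scale detachment and microsecond-scale flight suffice to test the deflection models.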

The team hopes that its success will be a first step towards measuring how real asteroid materials are vaporized and deflected by more powerful X-ray bursts. This could lead to the development of a vital new line of defence against devastating asteroid impacts.

“Developing a scientific understanding of how different asteroid materials will respond is critically important for designing an intercept mission and being confident that mission would work,” Moore says. “You don’t want to take chances on the next big impact.”

The research is described in Nature Physics.

The post ‘Mock asteroids’ deflected by X-rays in study that could help us protect Earth appeared first on Physics World.

]]>
Research update Lab-based experiment shows how centimetre-sized objects are accelerated https://physicsworld.com/wp-content/uploads/2024/10/15-10-2024-Z-Machine.jpg newsletter
Quantum material detects tiny mechanical strains https://physicsworld.com/a/quantum-material-detects-tiny-mechanical-strains/ Tue, 15 Oct 2024 08:00:14 +0000 https://physicsworld.com/?p=117392 Sensitivity of vanadium-oxide-based device breaks previous record by more than an order of magnitude

The post Quantum material detects tiny mechanical strains appeared first on Physics World.

]]>
A new sensor can detect mechanical strains that are more than an order of magnitude weaker than was possible with previously reported devices. Developed at Nanjing University, China, the sensor works by detecting changes that take place in single-crystal vanadium oxide materials as they undergo a transition from a conducting to an insulating phase. The new device could have applications in electronics engineering as well as materials science.

To detect tiny deformations in materials, you ideally want a sensor that undergoes a seamless and easily measurable transition whenever a strain – even a very weak one – is applied to it. Phase transitions, such as the shift from a metal to an insulator, fit the bill because they produce a significant change in the material’s resistance, making it possible to generate large electrical signals. These signals can then be measured and used to quantify the strain that triggered them.
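As a generic illustration of this point (not the Nanjing team’s actual readout circuit), a simple voltage divider shows how an orders-of-magnitude resistance jump at a phase transition translates into a large, easily measured voltage change. All component values here are hypothetical.

```python
# Hypothetical voltage-divider readout for a phase-transition strain sensor:
# the device resistance swings by ~3 orders of magnitude across the transition,
# so the output voltage changes dramatically even for a crude circuit.
V_SUPPLY = 1.0   # supply voltage, V (hypothetical)
R_SENSE = 10e3   # fixed series sense resistor, ohm (hypothetical)

def divider_out(r_device):
    """Voltage across the sense resistor for a given device resistance."""
    return V_SUPPLY * R_SENSE / (R_SENSE + r_device)

v_metal = divider_out(1e3)   # conducting (metallic) phase, ~1 kohm assumed
v_insul = divider_out(1e6)   # insulating phase, ~1 Mohm assumed
print(f"{v_metal:.3f} V (metallic) -> {v_insul:.4f} V (insulating)")
```

The ratio between the two output levels is close to 100, which is the sense in which a phase transition “generates large electrical signals” compared with the few-percent resistance changes of conventional strain gauges.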

Traditional strain sensors, however, are based on metal and semiconductor compounds, which have resistances that don’t change much under strain. This makes it hard to detect weak strains caused by, for example, the movement of microscopic water droplets around a surface.

A research team co-led by Feng Miao and Shi-Jun Liang has now got around this problem by developing a sensor based on the bronze phase of vanadium oxide, VO2(B). The team initially chose to study this material purely to understand the mechanisms behind its temperature-induced phase transitions. Along the way, though, they noticed something unusual. “As our research progressed, we discovered that this material exhibits a unique response to strain,” Liang recalls. “This prompted us to shift the project’s focus.”

A fabrication challenge

Because vanadium oxide has a complex structure, fabricating a sensor from this quantum material was among the team’s biggest challenges. To make their device, the Nanjing researchers used a specially adapted hydrogen-assisted chemical vapour deposition micro-nano fabrication process. This enabled them to produce high-quality, smooth single crystals of the material, which they characterized using a combination of electrical and spectroscopic techniques, including high-resolution transmission electron microscopy (HRTEM). They then needed to transfer this crystal from the SiO2/Si wafer on which it was grown to a flexible substrate (a smooth and insulating polyimide), which posed further experimental challenges, Liang says.

Once they had accomplished this, the researchers loaded the polyimide substrate/VO2(B) into a customized strain setup. They bonded the device to a homemade socket and induced uniaxial tensile strain in the material by vertically pushing a nanopositioner-controlled needle against it. This bends the flexible substrate and curves the upper surface of the sample.

They then measured how the current-voltage characteristics of the mechanical sensor changed as they applied strain to it. Under no strain, the channel current of the device registers 165 μA at a bias of 0.5 V, indicating that it is conducting. When the strain increases to 0.95%, however, the current drops to just 0.50 μA, suggesting a shift into an insulating state.
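A quick sanity check on these figures, applying Ohm’s law at the fixed 0.5 V bias, recovers the resistance change implied by the reported current drop:

```python
# Ohm's-law arithmetic on the figures quoted above (fixed 0.5 V bias).
V_BIAS = 0.5            # bias voltage, V
i_unstrained = 165e-6   # channel current with no strain, A
i_strained = 0.50e-6    # channel current at 0.95% strain, A

r_unstrained = V_BIAS / i_unstrained   # ~3.0 kohm: conducting state
r_strained = V_BIAS / i_strained       # 1.0 Mohm: insulating state
ratio = r_strained / r_unstrained      # resistance change implied at this bias
print(f"{r_unstrained:.0f} ohm -> {r_strained:.0f} ohm ({ratio:.0f}x)")
```

At this bias the current drop corresponds to roughly a 330-fold resistance increase; the separately quoted 2600-fold figure below comes from the team’s direct resistance measurements at 0.78% strain.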

A strikingly large variation

The researchers also measured the response of the device to intermediate strains. As they increased the applied strain, they found that at first, the device’s resistance increased only slightly. When the uniaxial tensile strain hit a value of 0.33%, though, the resistance jumped, and afterwards it increased exponentially with applied strain. By the time they reached 0.78% strain, the resistance was more than 2600 times greater than it was in the strain-free state.

This strikingly large variation is due to a strain-induced metal-insulator transition in the single-crystal VO2(B) flake, Miao explains. “As the strain increases, the entire material transitions to an insulator, resulting in a significant increase in its resistance that we can measure,” he says. This resistance change is durable, he adds, and can be measured with the same precision even after 700 cycles, proving that the technique is reliable.

Detecting airflows and vibrations

To test their device, the Nanjing University team used it to sense the slight mechanical deformation caused by placing a micron-sized piece of plastic on it. As well as detecting the slight mechanical pressure of small objects like this, they found that the device can also monitor gentle airflows and sense the tiny vibrations produced when small water droplets (about 9 μL in volume) move on flexible substrates.

“Our work shows that quantum materials like vanadium oxide show much potential for strain detection applications,” Miao tells Physics World. “This may motivate researchers in materials science and electronic engineering to study such compounds in this context.”

This work, which is detailed in Chinese Physics Letters, was a proof-of-concept validation, Liang adds. Future studies will involve growing large-area samples and exploring how to integrate them into flexible devices. “These will allow us to make ultra-sensitive quantum material sensing chips,” he says.

The post Quantum material detects tiny mechanical strains appeared first on Physics World.

]]>
Research update Sensitivity of vanadium-oxide-based device breaks previous record by more than an order of magnitude https://physicsworld.com/wp-content/uploads/2024/10/flexible-mechanical-sensor.jpg newsletter
Electrical sutures accelerate wound healing https://physicsworld.com/a/electrical-sutures-accelerate-wound-healing/ Mon, 14 Oct 2024 08:45:01 +0000 https://physicsworld.com/?p=117383 Surgical stitches that generate electrical charge speed up the healing of muscle wounds in rats

The post Electrical sutures accelerate wound healing appeared first on Physics World.

]]>
Surgical sutures are strong, flexible fibres used to close wounds caused by trauma or surgery. But could these stitches do more than just hold wounds closed? Could they, for example, be designed to accelerate the healing process?

A research team headed up at Donghua University in Shanghai has now developed sutures that can generate electricity at the wound site. They demonstrated that the electrical stimulation produced by these sutures can speed the healing of muscle wounds in rats and reduce the risk of infection.

“Our research group has been working on fibre electronics for almost 10 years, and has developed a series of new fibre materials with electrical powering, sensing and interaction functions,” says co-project leader Chengyi Hou. “But this is our first attempt to apply fibre electronics in the biomedical field, as we believe the electricity produced by these fibres might have an effect on living organisms and influence their bioelectricity.”

The idea is that the suture will generate electricity via a triboelectric mechanism, in which movement caused by muscles contracting and relaxing generates an electric field at the wound site. The resulting electrical stimulation should accelerate wound repair by encouraging cell proliferation and migration to the affected area. It’s also essential that the suture material is biocompatible and biodegradable, eliminating the need for surgical stitch removal.

To meet these requirements, Hou and colleagues created a bioabsorbable electrical stimulation suture (BioES-suture). The BioES-suture is made from a resorbable magnesium (Mg) filament electrode, wrapped with a layer of bioabsorbable PLGA (poly(lactic-co-glycolic acid)) nanofibres, and coated with a sheath made of the biodegradable thermoplastic polycaprolactone (PCL).

Structure of the BioES-suture

After the BioES-suture is used to stitch a wound, any subsequent tissue movement results in repeated contact and separation between the PLGA and PCL layers. This generates an electric field at the wound site; the Mg electrode then harvests this electrical energy to provide stimulation and enhance wound healing.

Clinical compatibility

The researchers measured the strength of the BioES-suture, finding that it had comparable sewing strength to commercial sutures. They also tested its biocompatibility by culturing fibroblasts (cells that play a crucial role in wound healing) on Mg filaments, PLGA-coated Mg and BioES-sutures. After a week, the viability of these cells was similar to that of control cells grown in standard petri dishes.

To examine the biodegradability, the researchers immersed the BioES-suture in saline. The core (Mg electrode and nanofibre assembly) completely degraded within 14 days (the muscle recovery period). The PCL layer remained intact for up to 24 weeks, after which no obvious BioES-suture could be seen.

Next, the researchers investigated the suture’s ability to generate electricity. They wound the BioES-suture onto an artificial muscle fibre and stretched it underwater to simulate muscle deformation. The BioES-suture’s electrical output was 7.32 V in air and 8.71 V in water, enough to light up an LCD screen.

They also monitored the BioES-suture’s power generation capacity in vivo, by stitching it into the leg muscle of rats. During normal exercise, the output voltage was about 2.3 V, showing that the BioES-suture can effectively convert natural body movements into stable electrical impulses.

Healing ability

To assess the BioES-suture’s ability to promote wound healing, the researchers first examined an in vitro wound model. Wounds receiving electrical stimulation from the BioES-suture exhibited faster migration of fibroblasts than a non-stimulated control group, as well as increased cell proliferation and expression of growth factors. The wound area shrank from approximately 69% to 10.8% after 24 h of exposure to the BioES-sutures, compared with 32.6% for traditional sutures.

The team also assessed the material’s antibacterial capabilities by immersing a standard suture, BioES-suture and electricity-producing BioES-suture in S. aureus and E. coli cultures for 24 h. The electricity-producing BioES-suture significantly inhibited bacterial growth compared with the other two, suggesting that this electrical stimulation could provide an antimicrobial effect during wound healing.

Finally, the researchers evaluated the therapeutic effect in vivo, by using BioES-sutures to treat bleeding muscle incisions in rats. Two other groups of rats were treated with standard surgical sutures or left unstitched. Electromyographic (EMG) measurements showed that the BioES-suture significantly increased EMG signal intensity, confirming its ability to generate electricity from mechanical movements.

After 10 days, they examined extracted muscle tissue from the three groups of rats. Compared with the other groups, the BioES-suture improved tissue migration from the wound bed and accelerated wound regeneration, achieving near-complete (96.5%) wound healing. Tissue staining indicated significantly enhanced secretion of key growth factors in the BioES-suture group compared with the other groups.

The researchers suggest that electrical stimulation from the BioES-suture promotes wound healing via a two-fold mechanism: the stimulation enhances the secretion of growth factors at the wound; these growth factors then promote cell migration, proliferation and deposition of extracellular matrix to accelerate wound healing.

In an infected rat wound, stitching with BioES-suture led to better healing and significantly lower bacterial count than wounds stitched with ordinary surgical sutures. The bacterial count remained low even without daily wound disinfection, indicating that the BioES-suture could potentially reduce post-operative infections.

The next step will be to test the potential of the BioES-suture in humans. The team has now started clinical trials, Hou tells Physics World.

The BioES-suture is described in Nature Communications.

The post Electrical sutures accelerate wound healing appeared first on Physics World.

]]>
Research update Surgical stitches that generate electrical charge speed up the healing of muscle wounds in rats https://physicsworld.com/wp-content/uploads/2024/10/14-10-24-electrical-suture.jpg newsletter
Top-cited authors from China discuss the importance of citation metrics https://physicsworld.com/a/top-cited-authors-from-china-discuss-the-importance-of-citation-metrics/ Fri, 11 Oct 2024 09:26:06 +0000 https://physicsworld.com/?p=117193 More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing

The post Top-cited authors from China discuss the importance of citation metrics appeared first on Physics World.

]]>
More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors of papers published in IOP Publishing’s own journals and those of its partners from 2021 to 2023 that are in the top 1% of the most cited papers.

Among them are quantum physicist Xin Wang from Xi’an Jiaotong University and environmental scientist Huijuan Cui from the Institute of Geographic Sciences and Natural Resources Research.

Cui, who carries out research into climate change, says that China’s carbon neutrality goal has attracted attention all over the world, which may be a reason why the paper, published in Environmental Research Letters, garnered so many citations. “As the Chinese government pays more attention on sustainability issues like climate change…we see growing activities and influence from Chinese researchers,” she says.

A similar impact can be seen in Wang’s work on “chiral quantum networks”, published in Quantum Science and Technology, which is likewise an area that is quickly gaining traction.

Citations play an important role in Chinese research, and they can also highlight a research topic’s growing impact. “They indicate that what we are studying is a mainstream research field,” Wang says. “Our peers agree with our results and judgement of the field’s future.” Cui, meanwhile, says that citations reflect “a positive acceptance and recognition of the quality of the research”.

Wang, however, notes that citations and impact don’t necessarily come overnight, and that researchers should not judge their work’s impact by how quickly it generates citations.

He adds that some pioneering papers are not well cited initially, with researchers only beginning to realize their value after several years. “If we are confident that our findings are important, we should not be upset with its bad citation but keep on working,” he says. “It is the role of the researcher to stick with their gut to uncover their key research questions. Citations will come afterwards.”

Language barriers

When it comes to Chinese researchers getting their research cited internationally, Wang says that the language barrier is one of the greatest challenges. “The readability of a paper has a close relation with its citation,” adds Wang. “Most highly cited papers not only have an insight into scientific problems, but also are well-written.”

He adds that non-native speakers tend to avoid using “snappy” expressions, which often leads to a conservative and uninspiring tone. “These expressions are grammatically correct but awkward to native speakers,” Wang states.

Despite the potential difficulties with slow citations and language barriers, Cui says that success can be achieved through determination and focussing on important research questions. “Constant effort yields success,” adds Cui. “Keep digging into interesting questions and keep writing high-quality papers.”

That view is backed by Wang. “If your research is well-cited, congratulations,” adds Wang. “However, please do not be upset with a paper with few citations – it still might be pioneering work in its field.”

  • For the full list of top-cited papers from China for 2024, see here. Xin Wang’s and Huijuan Cui’s award-winning research can be read here and here, respectively

The post Top-cited authors from China discuss the importance of citation metrics appeared first on Physics World.

]]>
Blog More than 90 papers from China have been recognized with a top-cited paper award for 2024 from IOP Publishing https://physicsworld.com/wp-content/uploads/2024/10/2024-10-sponsored-headshots.jpg newsletter
MRI-linac keeps track of brain tumour changes during radiotherapy https://physicsworld.com/a/mri-linac-keeps-track-of-brain-tumour-changes-during-radiotherapy/ Thu, 10 Oct 2024 15:00:32 +0000 https://physicsworld.com/?p=117347 Daily MR imaging could enable treatment adaptation to glioblastoma growth or shrinkage during radiotherapy

The post MRI-linac keeps track of brain tumour changes during radiotherapy appeared first on Physics World.

]]>
Glioblastoma, the most common primary brain cancer, is treated with surgical resection where possible followed by chemoradiotherapy. Researchers at the University of Miami’s Sylvester Comprehensive Cancer Center have now demonstrated that delivering the radiotherapy on an MRI-linac could provide an early warning of tumour growth, potentially enabling rapid adaptation during the course of treatment.

The Sylvester Comprehensive Cancer Center has been treating glioblastoma patients with MRI-guided radiotherapy since 2017. While standard clinical practice employs MRI scans before and after treatment (roughly three months apart) to monitor a patient’s response, the MRI-linac enables daily imaging. The research team, led by radiation oncologist Eric Mellon, proposed that such daily scans could reveal any changes in the tumour volume or resection cavity far earlier than the standard approach.

To investigate this idea, Mellon and colleagues studied 36 patients with glioblastoma undergoing chemoradiotherapy on a 0.35 T MRI-linac. During 30 radiotherapy fractions, delivered over six weeks, they imaged patients daily on the MRI-linac to assess the volumes of lesions and surgical resection cavities (the site where the tumour was removed).

The researchers then compared the non-contrast MRI-linac images to images recorded pre- (one week before) and post- (one month after) treatment using a standalone 3T MRI with gadolinium contrast. Detailing their findings in the International Journal of Radiation Oncology – Biology – Physics, they report that in general, lesion and cavity volumes seen on non-contrast MRI-linac scans correlated strongly with volumes measured using standalone contrast MRI.

Of the patients in this study, eight had a cavity in the brain, 12 had a lesion and 16 had both cavity and lesion. From pre- to post-radiotherapy, 18 patients exhibited lesion growth, while 11 had cavity shrinkage. In 74% of the cases, changes in lesion volume (growth, shrinkage or no change) assessed on the MRI-linac matched those seen on contrast MRI.

“If MRI-linac lesion growth did occur, which was in 60% of our patients [with lesions], there is a 57% chance that it will correspond with tumour growth on standalone post-contrast imaging,” said first author Kaylie Cullison, who shared the study findings at the recent ASTRO Annual Meeting.

In the other 26% of cases, contrast MRI suggested lesion shrinkage while the MRI-linac scans showed lesion growth. Cullison suggested that this may be partly due to radiation-induced oedema, which is difficult to distinguish from tumour on the non-contrast MRI-linac images.

The significant anatomic changes seen during daily imaging of glioblastoma patients suggest that adaptation could play an important role in improving their treatment. In cases where lesions or surgical resection cavities shrink, for example, treatment margins could be reduced to spare normal brain tissue from irradiation. Conversely, for patients with growing lesions, radiotherapy margins could be expanded to ensure complete tumour coverage.

Importantly, there were no cases in this study where patients showed a decrease in their MRI-linac lesion volumes and an increase in their standalone MRI volumes from pre- to post-treatment. In other words, the MRI-linac did not miss any cases of true tumour growth. “You can use the MRI-linac non-contrast imaging as an early warning system for potential tumour growth,” said Cullison.

Based on their findings, the researchers propose an adaptive workflow for glioblastoma radiotherapy. For resection cavities, which are clearly visible on non-contrast MRI-linac images, adaptation to shrinkage seen on weekly (standalone or MRI-linac) non-contrast MR images is feasible. In parallel, if an MRI-linac scan shows lesion progression during treatment, gadolinium contrast could be administered (for standalone MRI or MRI-linac scans) to confirm this growth and define adaptive target volumes.

An additional advantage of this workflow is that it reduces the use of contrast. Glioblastoma evolution is typically evaluated using contrast-enhanced MRI. However, potential gadolinium deposition with repeated contrast scans is a concern among patients, and the US Food & Drug Administration advises that gadolinium contrast studies should be minimized where possible. This new adaptive approach meets this requirement by only requiring contrast when non-contrast MRI shows an increase in lesion size.

Cullison tells Physics World that the team will next conduct an adaptive radiation therapy trial using the proposed workflow, to determine whether it improves patient outcomes. “We also plan further exploration and analysis of our data, including multiparametric MRI from the MRI-linac, in a larger patient cohort to try to predict patient outcomes (tumour growth; true progression versus pseudo-progression; survival times, etc) earlier than current methods allow,” she explains.

The post MRI-linac keeps track of brain tumour changes during radiotherapy appeared first on Physics World.

]]>
Research update Daily MR imaging could enable treatment adaptation to glioblastoma growth or shrinkage during radiotherapy https://physicsworld.com/wp-content/uploads/2024/10/10-10-24-Cullison-Mellon.jpg
Unlocking the future of materials science with magnetic microscopy https://physicsworld.com/a/unlocking-the-future-of-materials-science-with-magnetic-microscopy/ Thu, 10 Oct 2024 14:00:12 +0000 https://physicsworld.com/?p=117198 JPhys Materials explores some of the key magnetic imaging technologies for the upcoming decade

The post Unlocking the future of materials science with magnetic microscopy appeared first on Physics World.

]]>

With a rapidly growing interest in magnetic materials for unconventional computing, data storage and sensor applications, active research is needed not only on material synthesis but also on the characterization of their properties. In addition to structural and integral magnetic characterization, imaging of magnetization patterns, current distributions and magnetic fields at the nano- and microscale is of major importance for understanding material responses and qualifying these materials for specific applications.

In this webinar, four experts will present on some of the key magnetic imaging technologies for the upcoming decade:

  • Scanning SQUID microscopy
  • Nanoscale magnetic resonance imaging
  • Coherent X-ray magnetic imaging
  • Scanning electron microscopy with polarization analysis

The webinar will run for two hours, with time for audience Q&A after each speaker.

Those interested in exploring this topic further are encouraged to read the 2024 roadmap on magnetic microscopy techniques and their applications in materials science, a single access point of information for experts in the field as well as for students entering it, available open access in Journal of Physics: Materials.

Katja Nowack received her PhD in physics at Delft University of Technology in 2009, focussing on controlling and reading out the spin of single electrons in electrostatically defined quantum dots for spin-based quantum information processing. During her postdoc at Stanford University, she shifted to low-temperature magnetic imaging using scanning superconducting quantum interference devices (SQUIDs). In 2015, she joined the Department of Physics at Cornell University, where her lab develops magnetic imaging techniques to study quantum materials and devices, including topological materials, unconventional superconductors and superconducting circuits.

Christian Degen joined ETH Zurich in 2011 after positions at MIT, Leiden University and IBM Research, Almaden. His background includes a PhD in magnetic resonance (with Beat Meier) and postdoctoral training in scanning force microscopy (with Dan Rugar). Since 2009, he has led a research group on quantum sensing and nanomechanics. He is a co-founder of the microscopy start-up QZabre.

Claire Donnelly completed her MPhys at the University of Oxford before moving to Switzerland to carry out her PhD at the Paul Scherrer Institute and ETH Zurich. She was awarded her PhD in 2017 for her work on 3D systems, in which she developed X-ray magnetic tomography, work that was recognized by a number of awards. After a postdoc at ETH Zurich, she moved to the University of Cambridge and the Cavendish Laboratory as a Leverhulme Early Career Research Fellow, where she focused on the behaviour of three-dimensional magnetic nanostructures. Since September 2021 she has been a Lise Meitner Group Leader of Spin3D at the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany. Her group focuses on the physics of three-dimensional magnetic and superconducting systems and on developing synchrotron X-ray-based methods to resolve their structure in 3D.

Mathias Kläui is professor of physics at Johannes Gutenberg-University Mainz and adjunct professor at the Norwegian University of Science and Technology. He received his PhD at the University of Cambridge, after which he joined the IBM Research Labs in Zürich. He was a junior group leader at the University of Konstanz and then became associate professor in a joint appointment between the EPFL and the PSI in Switzerland before moving to Mainz. He has published more than 400 articles and given more than 250 invited talks, is a Fellow of the IEEE, IOP and APS and has been awarded a number of prizes and scholarships.

About this journal

JPhys Materials is a new open access journal highlighting the most significant and exciting advances in materials science.

Editor-in-chief: Stephan Roche is ICREA professor at the Catalan Institute of Nanosciences and Nanotechnology (ICN2) and the Barcelona Institute of Science and Technology.


The post Unlocking the future of materials science with magnetic microscopy appeared first on Physics World.

Deep connections: why two AI pioneers won the Nobel Prize for Physics https://physicsworld.com/a/deep-connections-why-two-ai-pioneers-won-the-nobel-prize-for-physics/ Thu, 10 Oct 2024 13:00:22 +0000 https://physicsworld.com/?p=116910 Our podcast guest is Anil Ananthaswamy, author of Why Machines Learn

The post Deep connections: why two AI pioneers won the Nobel Prize for Physics appeared first on Physics World.

It came as a bolt from the blue for many Nobel watchers. This year’s Nobel Prize for Physics went to John Hopfield and Geoffrey Hinton for their “foundational discoveries and inventions that enable machine learning and artificial neural networks”.

In this podcast I explore the connections between artificial intelligence (AI) and physics with the author Anil Ananthaswamy – who has written the book Why Machines Learn: The Elegant Maths Behind Modern AI. We delve into the careers of Hinton and Hopfield and explain how they laid much of the groundwork for today’s AI systems.

We also look at why Hinton has spoken out about the dangers of AI and chat about the connection between this year’s physics and chemistry Nobel prizes.

SmarAct Group logo

SmarAct proudly supports Physics World’s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

Aluminium oxide reveals its surface secrets https://physicsworld.com/a/aluminium-oxide-reveals-its-surface-secrets/ Thu, 10 Oct 2024 08:30:55 +0000 https://physicsworld.com/?p=117262 New non-contact atomic force microscopy images shed more light on the "enigmatic insulator" aluminium oxide

The post Aluminium oxide reveals its surface secrets appeared first on Physics World.

Determining the surface structure of an insulating material is a difficult task, but it is important for understanding its chemical and physical properties. A team of researchers in Austria has now succeeded in doing just this for the technologically important insulator aluminium oxide (Al2O3). The team’s new images – obtained using non-contact atomic force microscopy (AFM) – not only reveal the material’s surface structure but also explain why a simple cut through a crystal is not energetically favourable for the material and leads to a complex rearrangement of the surface.

Al2O3 is an excellent insulator and is routinely employed in many applications, for example as a support material for catalysts, as a chemically resistant ceramic and in electronic components. Characterizing how the surface atoms arrange themselves in this material is important for understanding, among other things, how chemical reactions occur on it.

A technique that works for all materials

Atoms in the bulk of a material arrange themselves in an ordered crystal lattice, but the situation is very different on the surface. The more insulating a material is, the more difficult it is to analyse its surface structure using conventional experimental techniques, which typically require conductivity.

Researchers led by Jan Balajka and Johanna Hütner at TU Wien have now used non-contact AFM to study the basal (0001) plane of Al2O3. This technique works – even for completely insulating materials – by scanning a sharp tip mounted on a quartz tuning fork at a distance of just 0.1 nm above a sample’s surface. The frequency of the fork varies as the tip interacts with the surface atoms and by measuring these changes, an image of the surface structure can be generated.
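In frequency-modulation AFM of this kind, the measured change in the tuning fork's resonance frequency relates to the tip–sample force gradient via the standard small-amplitude approximation Δf ≈ −(f0/2k)·∂F/∂z. A minimal sketch of that relation follows; the sensor numbers are illustrative assumptions typical of quartz tuning-fork sensors, not values reported by the TU Wien team:

```python
def frequency_shift_hz(f0_hz: float, k_n_per_m: float, force_gradient_n_per_m: float) -> float:
    """Small-amplitude FM-AFM approximation: df = -(f0 / 2k) * dF/dz."""
    return -(f0_hz / (2.0 * k_n_per_m)) * force_gradient_n_per_m

# Assumed, illustrative numbers: resonance f0 = 32768 Hz, stiffness
# k = 1800 N/m, and an attractive tip-sample force gradient of 2 N/m.
print(round(frequency_shift_hz(32768.0, 1800.0, 2.0), 2))  # -18.2
```

An attractive interaction (positive force gradient here) lowers the resonance frequency, which is why approach curves in such experiments show negative frequency shifts.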

The problem is that while non-contact AFM can identify where the atoms are located, it cannot distinguish between the different elements making up a compound. Balajka, Hütner and colleagues overcame this problem by modifying the tip and attaching a single oxygen atom to it. The oxygen atoms on the surface of the sample being studied repel this oxygen atom, while its aluminium atoms attract it.

“Mapping the local repulsion or attraction enabled us to visualize the chemical identity of each surface atom directly,” explains Hütner. “The complex three-dimensional structure of the subsurface layers was then determined computationally with novel machine learning algorithms using the experimental images as input,” adds Balajka.

Surface restructuring

According to their analyses, which are detailed in Science, when a cut is made on the Al2O3 surface, it restructures so that the aluminium in the topmost layer is able to penetrate deeper into the material and chemically bond with the oxygen atoms therein. This reconstruction energetically stabilizes the structure, but it remains stoichiometrically the same.

“The atomic structure is a foundational attribute of any material and is reflected in its macroscopic properties,” says Balajka. “The surface structure governs any surface chemistry, such as chemical reactions in catalytic processes.”

Balajka says that the challenges the team had to overcome in this work were threefold: “The first was the strongly insulating character of the material; the second, the lack of chemical sensitivity in (conventional) scanning probe microscopy; and the third, the structural complexity of the alumina surface, which leads to a large configuration space of possible structures.”

As an enigmatic insulator, alumina has posed significant challenges for experimental studies and its surface structure has evaded precise determination since the 1960s, Balajka tells Physics World. Indeed, it was listed as one of the “three mysteries in surface science” in the late 1990s.

The new findings provide a fundamental piece of knowledge, the detailed surface structure of an important material, and pave the way for advances in catalysis, materials science and many other fields, he adds. “The experimental and computational approaches we employed in this study can be applied to study other materials that have been too complex or inaccessible to conventional techniques.”

Enigmatic particle might be a molecular pentaquark https://physicsworld.com/a/enigmatic-particle-might-be-a-molecular-pentaquark/ Wed, 09 Oct 2024 13:57:04 +0000 https://physicsworld.com/?p=117332 Decay rate of exotic hadron suggests it comprises five quarks

The post Enigmatic particle might be a molecular pentaquark appeared first on Physics World.

The enigmatic Ξ(2030) particle, once thought to consist of three quarks, may actually be a molecular pentaquark – an exotic hadron comprising five quarks. That is the conclusion of Chinese physicists Cai Cheng and Jing-wen Feng at Sichuan Normal University and Yin Huang at Southwest Jiaotong University. They employed a simplified strong interaction theory to calculate the decay rate of the exotic hadron, concluding that it comprises five quarks.

This composition aligns more closely with experimental data than does the traditional three-quark model for Ξ(2030). While other pentaquarks have been identified in accelerator experiments to date, these particles are still considered exotic and are poorly understood compared to two-quark mesons and three-quark baryons. As a result, this latest work is a significant step towards understanding pentaquarks.

The Ξ(2030) is named for its mass in megaelectronvolts and was first discovered at Fermilab in 1977. At that time, the idea of exotic hadrons that did not fit into the conventional meson–baryon classification was not widely accepted. Conventionally, a meson comprises a quark and an antiquark and a baryon contains three quarks.

Deviation from three-quark model

Consequently, based on its properties, the scientific community classified the particle as a baryon, similar to protons and neutrons. However, further investigations at CERN, SLAC, and Fermilab revealed that the particle’s interaction properties deviated significantly from what the three-quark model predicted, leading scientists to question its three-quark nature.

To address this issue earlier this year, Yin Huang and colleague Hao Hei proposed that the Ξ(2030) could be a molecular pentaquark, suggesting that it consists of a meson and a baryon loosely bound together by the strong nuclear force. In the present study, Cheng, Feng, and Huang elaborated on this idea, analysing a model where the particle is composed of a K meson, which contains a strange antiquark and a light quark (either up or down), alongside a Σ baryon that comprises a strange quark and two light quarks.

To do the study, the team had to use a simplified approach to calculating strong interactions. This is because quantum chromodynamics, the comprehensive theory describing such interactions, is too complex for detailed calculations of hadronic properties. Their approach focuses on hadrons rather than the fundamental quarks and gluons that make up hadrons. They calculated the probabilities of the Ξ(2030) decaying into various strongly interacting particles, including π and K mesons, as well as Σ and Λ baryons.

“It is confirmed that this particle is a hadron molecular state, and its core is primarily composed of K and Σ components,” explains Feng. “The main decay channels are K+Σ and K+Λ, which are consistent with the experimental results. This conclusion not only deepens our understanding of the internal structure of the Ξ(2030), but also further supports the applicability of the concept of hadronic molecular state in particle physics.”

Extremely short lifetime

The Ξ(2030) particle has an extremely short lifetime of about 10⁻²³ s, making it challenging to study experimentally. As a result, measuring its properties can be imprecise. The uncertainty surrounding these measurements means that comparisons with theoretical results are not always conclusive, indicating that further experimental work is essential to validate the team’s claims regarding the interaction between the meson and baryons that make up the Ξ(2030).
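A lifetime this short translates, via the energy–time uncertainty relation Γ = ħ/τ, into a broad decay width, and it is this width that experiments actually measure. A quick back-of-the-envelope sketch, using the rough 10⁻²³ s figure quoted above rather than a precise measurement:

```python
HBAR_MEV_S = 6.582119569e-22  # reduced Planck constant, in MeV*s

def decay_width_mev(lifetime_s: float) -> float:
    """Decay width Gamma = hbar / tau implied by a particle's mean lifetime."""
    return HBAR_MEV_S / lifetime_s

# A lifetime of roughly 1e-23 s corresponds to a width of tens of MeV
print(round(decay_width_mev(1e-23), 1))  # 65.8
```

Widths of this order overlap with those of neighbouring resonances, which is one reason the particle’s properties are so hard to pin down.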

“However, experimental verification still needs time, involving multi-party cooperation and detailed planning, and may also require technological innovation or experimental equipment improvement,” said Huang.

Despite the challenges, the researchers are not pausing their theoretical investigations. They plan to delve deeper into the structure of the Ξ(2030) because the particle’s complex nature could provide valuable insights into the subatomic strong interaction, which remains poorly understood due to the intricacies of quantum chromodynamics.

“Current studies have shown that although the theoretically calculated total decay rate of Ξ(2030) is basically consistent with the experimental data, the slight difference reveals the complexity of the particle’s internal structure,” concluded Feng. “This important discovery not only reinforces the hypothesis of Ξ(2030) as a meson–baryon molecular state, but also suggests that the particle may contain additional components, such as a possible triquark configuration.”

Moreover, the very conclusion regarding the molecular pentaquark structure of Ξ(2030) warrants further scrutiny. The effective theory employed by the authors draws on data from other experiments with strongly interacting particles and includes a fitting parameter not derived from the foundational principles of quantum chromodynamics. This raises the possibility of alternative structures for Ξ(2030).

“Maybe Ξ(2030) is a molecular state, but that means explaining why K and Σ should stick together – [Cheng and colleagues] do provide an explanation but their mechanism is not validated against other observations so it is impossible to evaluate its plausibility,” said Eric Swanson at the University of Pittsburgh, who was not involved in the study.

The research is described in Physical Review D.

Pioneers of AI-based protein-structure prediction share 2024 chemistry Nobel prize https://physicsworld.com/a/pioneers-of-ai-based-protein-structure-prediction-share-2024-chemistry-nobel-prize/ Wed, 09 Oct 2024 09:45:31 +0000 https://physicsworld.com/?p=116909 Protein designer is also honoured in this year’s award

The post Pioneers of AI-based protein-structure prediction share 2024 chemistry Nobel prize appeared first on Physics World.

The 2024 Nobel Prize for Chemistry has been awarded to David Baker, Demis Hassabis and John Jumper for their work on proteins.

Baker bagged half the prize “for computational protein design”, while Hassabis and Jumper share the other half “for protein structure prediction”.

Baker is a biochemist based at the University of Washington in Seattle. Hassabis did a PhD in cognitive neuroscience at University College London and is CEO and co-founder of UK-based Google DeepMind. Also based at Google DeepMind, Jumper studied physics at Vanderbilt University and the University of Cambridge before doing a PhD in chemistry at the University of Chicago.

Entirely new protein

In 2003 Baker was the first to create an entirely new protein from its constituent amino acids – and his research group has since created many more new proteins. Some of these molecules have found use in sensors, nanomaterials, vaccines and pharmaceuticals.

In 2020 Jumper and Hassabis created AlphaFold2, an artificial-intelligence model that can predict the structure of a protein from its amino-acid sequence. A protein begins as a linear chain of amino acids that folds itself into a complicated 3D structure.

These structures can be determined experimentally using techniques including X-ray crystallography, electron microscopy and nuclear magnetic resonance. However, this is time-consuming and expensive.

Used by millions

AlphaFold2 was trained using many different protein structures and went on to successfully predict the structures of nearly all of the 200,000 known proteins. It has been used by millions of people around the world and could boost our understanding of a wide range of biological and chemical processes including bacterial resistance to antibiotics and the decomposition of plastics.


Pele’s hair-raising physics: glassy gifts from a volcano goddess https://physicsworld.com/a/peles-hair-raising-physics-glassy-gifts-from-a-volcano-goddess/ Tue, 08 Oct 2024 13:00:30 +0000 https://physicsworld.com/?p=116989 Volcanic hairs and tears reveal a wealth of information about what lies within lava

The post Pele’s hair-raising physics: glassy gifts from a volcano goddess appeared first on Physics World.

A sensible crew cut, a chic bob, an outrageous mullet. You can infer a lot about a person by how they choose to style their hair. But it might surprise you to know that it is possible to learn more about some objects in the natural world from their “hair” – be it the “quantum hair” that can reveal the deepest darkest secrets of what happens within a black hole, or glassy hair that emerges from the depths of our planet, via a volcano.

In December 2017 University of Oxford volcanologist Tamsin Mather travelled to Nicaragua to visit an “old friend”: the Masaya volcano, some 20 km south of the country’s capital of Managua. Recent activity had created a small, churning lava lake in the centre of the volcano’s active crater, one whose “mesmerising” glow at night attracted a stream of enchanted tourists.

For those who could draw their eyes away from the roiling lava, however, another treat awaited: a gossamer carpet of yellow fibres strung across the downwind crater’s edge. Known to geologists as “Pele’s hair”, Mather describes these beautiful deposits as like “glistening spiders’ webs”, shiny and glass-like, looking like “fresh cut grass after some dew”.

These glassy strands, often blown along by the wind, have been found in the vicinity of volcanoes across the globe – not only Masaya, but also Mount Etna in Italy, Erta Ale in Ethiopia, and across Iceland, where they are instead dubbed nornahár, or “witches’ hair”. They have even been found to be produced by underwater volcanoes at depths of up to 4.5 km below sea level. However, Pele’s hair is arguably most associated with Hawaii, from whose religion (not the footballer) the deposits take their name (see box “The legend of Pele”).

Lava fountains and candy floss

Although you might hardly guess it from its fine nature, Pele’s hair has quite the violent birth. It forms when droplets of molten rock are flung into the air from lava fountains, cascades, particularly vigorous flows or even bursting gas bubbles. This material is then stretched out into long threads as the air (or, in some cases, water) quenches them into a volcanic glass. Pele’s hair can be both thicker and finer than its human counterpart, ranging from around 1 to 300 µm thick (Jour. Research US Geol. Survey 5 93). While the strands are typically around 5–15 cm in length, some have been recorded to reach a whopping 2 m long.

Microscope image of Pele's hair

Katryn Wiese – an earth scientist at the College of San Mateo in California – explains that the hairs form in the same way that glass blowers craft their wares. “Melt a silica-rich material like beach sand and as it cools down, blow air through it to elongate it and stretch it out,” she says. Key to the formation of Pele’s hair, Wiese notes, is that the molten lava does not have time to crystallize as it cools. “Pele’s hair is really no different than ash. Ash is basically small beads of microscopic glass, whereas Pele’s hair is a strung-out thin line of glass.”

Go to a funfair and you’ll see this same process at play at the candy floss stall. “Sugar is melted by a heat coil in the centre of a cotton candy machine and then the liquid melted sugar is blown outwards while the device spins,” Wiese explains, to produce “thin threads of liquid that freeze into non-crystalline sugar or glass”.

Just as there is a fine art to spinning cotton candy, so too does the formation of Pele’s hair require very specific conditions to be met. First, the lava has to cool slowly enough so it can stretch out into thin strands. Second, the lava must be sufficiently fluid, rather than being more viscous. That’s why Pele’s hair is only formed by so-called basaltic eruptions, where the magma has a relatively low silica content of around 45–52%.

The composition of the initial lava is also a factor in the colour of the hairs, which can range from a golden yellow to a dark brown. “Hawaiian glasses are classically amber coloured,” notes Wiese. She explains that basalts from Hawaii are primarily made up of silica and aluminium oxides, along with a mix of iron, magnesium and calcium oxides, as well as trace amounts of other elements and gases. “The gases often contribute to oxidation of the elements and can also lead to different colours in the glass – the same process as blown glass in the art world.”

The legend of Pele

the Halema‘uma‘u pit crater of the volcano Kīlauea

Both Pele’s hair and Pele’s tears take their name from the Hawaiian goddess of volcanoes and fire: Pelehonuamea, “She who shapes the sacred land”, who is believed to reside beneath the summit of the volcano Kīlauea on the Big Island – the current eruptive centre of the Hawaiian hotspot.

Many ancient legends of Pele depict the deity as having a fiery personality. According to one account, it was this temperament that brought her to Hawaii in the first place, having been born on the island of Tahiti. As the story goes, Pele seduced the husband of her sister Nāmaka, the water goddess. This led to a fight between the siblings that proved the final straw for their father, who sent Pele into exile.

Accepting a great canoe from her brother, the king of the sharks, Pele voyaged across the seas – trying to light her fires on every island she reached – pursued by the vengeful Nāmaka. Mirroring how the Hawaiian islands formed in sequence as the Earth’s crust moved relative to the underlying hotspot, Pele moved along the chain repeatedly trying to dig a fiery crater in which to live, only for each to be extinguished by Nāmaka.

The pair had their final confrontation on Maui, with Nāmaka defeating Pele and tearing her apart at the hill known today as Ka Iwi o Pele – “the bones of Pele”. Her spirit, meanwhile, flew to Kīlauea, finding its eternal home in the Halema‘uma‘u pit crater.

Tears and hairs – volcanic insights

Another important factor in the formation of Pele’s hair is the velocity at which magma is “spurted” out during an eruption, according to Japanese volcanologist Daisuke Shimozuru, who was studying Pele’s hair and tears in the 1990s.

Based on experiments involving jets of ink released from a nozzle at different speeds, Shimozuru concluded that thread-like expulsions like Pele’s hair are only formed when the eruption velocity is sufficiently high (Bulletin of Volcanology 56 217). At lower speeds, the molten material is instead quenched without being stretched, forming glassy droplets, referred to as Pele’s tears, sometimes with a hair or two attached.

Two black glass beads on a person's hand

According to Kenna Rubin – a volcanologist at the University of Rhode Island – studying the shape of these black globules can shine a light on the properties of the lava that formed them. They can provide information not only about the ejection speed, but also related parameters such as the temperature, viscosity and the distance they travelled in the atmosphere before solidifying.

Furthermore, the tears can preserve tiny bubbles of volcanic gases within themselves, trapped in cavities known as “vesicles”. Analysing these gases can reveal many details of the chemical composition of the magma that released them. These can be a useful tool to shine a light on the exact nature of the hazard posed by such eruptions.

In a similar fashion, Pele’s hair can also offer valuable insights to volcanologists about the nature of the eruptions that formed them – thereby helping to inform models of the hazards that future volcanoes may pose to nearby life and property.

Window within, and to the past

“Pele’s hair and tears are a subset of the pantheon of particles ejected by a volcano when they erupt,” notes Rubin. By examining the particles that come out over time, as well as studying the geophysical activity at a volcano, such as seismicity and gas ejection, researchers “can then make inferences about the conditions that were extant in past eruptions”. In turn, she adds, “This allows us to look at old eruption deposits that we didn’t witness erupting, and infer the same kinds of conditions.”

While Pele’s hair and tears are both relatively rare volcanic products, when they do exist they can help to constrain the eruption conditions – offering a window into not only recent but also past eruptions when so-called “fossil” samples have been preserved.

A lava lake on Volcan Masaya

Alongside the composition of the glasses (and any trapped gases within), the shape of hairs and tears can shine a light on the various forces that affected them as they flew through the air and cooled. In fact, the presence of the hair around a volcano is itself a sign that the lava is of the least viscous type, and is undergoing some form of fountaining or bubbling.

There are, of course, many other types of material or fragments of rock that get ejected into the air when volcanoes erupt. But the great thing about Pele’s hair is that, having cooled from lava to a glass, it represents the lava’s bulk composition. As Wiese notes, “We can quickly determine the composition of the lavas that are erupting from just a single sample.”

For example, Mather collected samples of Pele’s hair from Masaya during a 2001 return visit to her cherished Nicaraguan haunt, enabling Mather and her colleagues to determine the composition of the lava erupting from Masaya’s vent in terms of both major elements and lead isotopes (Journal of Atmospheric Chemistry 46 207; Atmospheric Environment 37 4453). As Mather says, “With other measurements we can think about how this composition changes with time and also compare it with the gas and particles that are dispersed in the plume.”

Pele’s curse

Drift of Pele's hair on a rock

There is an urban legend on the islands that anything native to Hawaii – whether it be sand, rock or even volcanic glass – cannot be removed without being cursed by Pele herself. Despite invoking Hawaii’s ancient volcano goddess, the myth is believed to be quite recent in origin. According to one account, it was dreamt up by a park ranger who was frustrated by tourists taking rocks from the island as souvenirs. Another attributes it to tour drivers, who tired of tourists bringing said rocks onto their buses and leaving dirt behind.

Either way, the story has taken hold as if it were an ancient Hawaiian taboo, one that some take extremely seriously. Volcanologist Kenna Rubin, for one, often receives returned rocks at her office at the University of Hawaii. “Tourists and visitors find my contact details online and return the lava rocks, or Pele’s hair,” she explains. “They apologise for taking the items as they feel they have been cursed by the goddess.”

The legend of Pele’s curse may be fictitious, but the hazards presented by Pele’s hair are very real, both to the unwitting visitor to Hawaii, and also the state’s permanent residents. Like fibreglass – which the hairs closely resemble – broken slivers of the hair can gain sharp ends that easily puncture the skin (or, worse, the eye) and break into smaller pieces as people try to remove them.

Not only can an active lava lake produce enough of the hair to carpet the surrounding area, but strands are easily picked up by the wind. From Kīlauea Volcano, for example, the US Geological Survey notes that prevailing winds tend to blow much of the Pele’s hair that is produced south to the Ka‘ū Desert, where it builds up in drifts against gully walls (see photo). In fact, hairs have been known to be carried up to tens of kilometres from the originating volcanic vent – and it is not uncommon on Hawaii to find Pele’s hair snagged on trees, utility poles and the like.

Hair in the catchment

Wind-blown Pele’s hair also poses a threat to the many locals who collect rainwater for drinking. “As ash, laze [“lava haze” – a mix of glass shards and acid released when basaltic lava enters the ocean] and Pele’s hair have been found to contain various metals and are hazardous to ingest, catchment users should avoid accumulating it in their water tanks,” the Hawaii State Department of Health advises in the event of volcanic activity.

However, even though Pele’s hair has the potential to harm humans, there are some residents of Hawaii who do benefit from it – birds. Collecting the strands like the bits of straw they resemble, our avian friends have been known to use the volcanic deposits to feather their nests; in fact, one made entirely from Pele’s hair has been preserved for posterity in the collections of the Hawaii Volcanoes National Park.

Pele’s tears can also serve as a proxy for the severity of eruptions. In a study published this March, geologist Scott Moyer and environmental scientist Dork Sahagian showed that the diameter of vesicles preserved in Pele’s tears from Hawaii is related to the height of the lava fountains that formed them (Frontiers in Earth Science 12 10.3389/feart.2024.1379985). Fountain height, in turn, is constrained by the separated gas content of the source magma, which controls eruption intensity.

It’s clear that Pele’s hair and tears are far more than a beautiful natural curiosity. Thanks to the tools and techniques of geoscience, we can use them to unravel the mysteries of Earth’s hidden interior.

John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics https://physicsworld.com/a/john-hopfield-and-geoffrey-hinton-share-the-2024-nobel-prize-for-physics/ Tue, 08 Oct 2024 09:45:44 +0000 https://physicsworld.com/?p=116908 Duo win for their work on machine learning

The post John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics appeared first on Physics World.

John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics for their “foundational discoveries and inventions that enable machine learning and artificial neural networks”. Known to some as the “godfather of artificial intelligence (AI)”, Hinton, 76, is currently based at the University of Toronto in Canada. Hopfield, 91, is at Princeton University in the US.

Ellen Moons from Karlstad University, who chairs the Nobel Committee for Physics, said at today’s announcement in Stockholm: “This year’s laureates used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets. These artificial neural networks have been used to advance research across physics topics as diverse as particle physics, materials science and astrophysics.”

Speaking on the telephone after the prize was announced, Hinton said, “I’m flabbergasted. I had no idea this would happen. I’m very surprised”. He added that machine learning and artificial intelligence will have a huge influence on society that will be comparable to the industrial revolution. However, he pointed out that there could be danger ahead because “we have no experience dealing with things that are smarter than us.”

“Two kinds of regret”

Hinton admitted that he does have some regrets about his work in the field. “There’s two kinds of regret. There’s regrets where you feel guilty because you did something you knew you shouldn’t have done. And then there are regrets where you did something that you would do again in the same circumstance but it may in the end not turn out well. That second kind of regret I have. I am worried the overall consequence of this might be systems more intelligent than us that eventually take control.”

Hinton spoke to the Nobel press conference from the West Coast of the US, where it was about 3 a.m. “I’m in a cheap hotel in California that doesn’t have a very good Internet connection. I was going to get an MRI scan today but I think I’ll have to cancel it.”

Hopfield began his career as a condensed-matter physicist before making the shift to neuroscience. In a 2014 perspective article for the journal Physical Biology called “Two cultures? Experiences at the physics–biology interface”, Hopfield wrote, “Mathematical theory had great predictive power in physics, but very little in biology. As a result, mathematics is considered the language of the physics paradigm, a language in which most biologists could remain illiterate.” Hopfield saw this as an opportunity because the physics paradigm “brings refreshing attitudes and a different choice of problems to the interface”. However, he was not without his critics in the biology community and wrote that one must have “have a thick skin”.

In the early 1980s, Hopfield developed his eponymous network, which can be used to store patterns and then retrieve them using incomplete information. This is called associative memory and an analogue in human cognition would be recalling a word when you only know the context and maybe the first letter or two.

Different types of network

A Hopfield network is a layer of neurons (or nodes) that are all connected together such that the state, 0 or 1, of each node is affected by the states of its neighbours (see above). This is similar to how magnetic materials are modelled by physicists – indeed, a Hopfield network is reminiscent of a spin glass.

When an image is fed into the network, the strengths of the connections between nodes are adjusted and the image is stored in a low-energy state. This minimization process is essentially learning. When an imperfect version of the same image is input, it is subject to an energy-minimization process that flips the values of some of the nodes until the two images resemble each other. What is more, several images can be stored in a Hopfield network, which can usually differentiate between all of them. Later networks used nodes that could take on more than two values, allowing more complex images to be stored and retrieved. As the networks improved, ever more subtle differences between images could be detected.
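The storage-and-retrieval dynamics described above can be sketched in a few lines of Python. This is a minimal illustration, not the laureates' own code; it uses the conventional ±1 "spin" states of the spin-glass analogy (the article's 0/1 states map straightforwardly onto ±1):

```python
import numpy as np

def train(patterns):
    # Hebbian rule: the weight between nodes i and j is the average
    # correlation of their states over the stored patterns
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, sweeps=5):
    # repeatedly flip each node to agree with its weighted input,
    # which can only lower (never raise) the network's energy
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# store one 8-node pattern, corrupt one node, then retrieve it
p = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(p[None, :])
noisy = p.copy()
noisy[0] = -noisy[0]
```

Running `recall(W, noisy)` flips the corrupted node back, recovering the stored pattern from incomplete information – the associative-memory behaviour described above.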

A little later on in the 1980s, Hinton was exploring how algorithms could be used to process patterns in the same way as the human brain. Using a simple Hopfield network as a starting point, he and a colleague borrowed ideas from statistical physics to develop a Boltzmann machine. It is so named because it works in analogy to the Boltzmann equation, which says that some states are more probable than others based on the energy of a system.

A Boltzmann machine typically has two connected layers of nodes – a visible layer that is the interface for inputting and outputting information, and a hidden layer. A Boltzmann machine can be generative – if it is trained on a set of similar images, it can produce a new and original image that is similar. The machine can also learn to categorise images. It was realized that the performance of a Boltzmann machine could be enhanced by eliminating connections between some nodes, creating “restricted Boltzmann machines”.
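The two-layer structure has a practical payoff in code: because a restricted Boltzmann machine has no connections within a layer, every hidden unit can be sampled at once given the visible layer, and vice versa. The following is a hedged sketch of one such Gibbs sampling step with made-up random weights, not any production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_hidden, b_visible):
    # One Gibbs step in a restricted Boltzmann machine: the probability of
    # each unit being "on" follows a Boltzmann-style logistic function of
    # its input energy
    p_h = sigmoid(v @ W + b_hidden)           # P(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h) * 1.0   # sample all hidden units at once
    p_v = sigmoid(h @ W.T + b_visible)        # P(v_i = 1 | h)
    return (rng.random(p_v.shape) < p_v) * 1.0  # reconstructed visible layer
```

Training repeats such steps over many examples, nudging the weights so that the machine's reconstructions resemble the data – which is what lets a trained machine generate new, similar images.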

Hopfield networks and Boltzmann machines laid the foundations for the development of later machine learning and artificial-intelligence technologies – some of which we use today.

A life in science

Diagram showing the brain’s neural network and an artificial neural network

Born on 6 December 1947 in London, UK, Hinton graduated with a degree in experimental psychology in 1970 from Cambridge University before doing a PhD on AI at the University of Edinburgh, which he completed in 1975. After a spell at the University of Sussex, Hinton moved to the University of California, San Diego, in 1978, before going to Carnegie Mellon University in 1982 and Toronto in 1987.

After becoming a founding director of the Gatsby Computational Neuroscience Unit at University College London in 1998, Hinton returned to Toronto in 2001 where he has remained since. From 2014, Hinton divided his time between Toronto and Google but then resigned from Google in 2023 “to freely speak out about the risks of AI.”

Elected as a Fellow of the Royal Society in 1998, Hinton has won many other awards, including the inaugural David E Rumelhart Prize in 2001 for the application of the backpropagation algorithm and Boltzmann machines. He also won the Royal Society’s James Clerk Maxwell Medal in 2016 and the Turing Award from the Association for Computing Machinery in 2018.

Hopfield was born on 15 July 1933 in Chicago, Illinois. After receiving a degree from Swarthmore College in 1954, he completed a PhD in physics at Cornell University in 1958. Hopfield then spent two years at Bell Labs before moving to the University of California, Berkeley, in 1961.

In 1964 Hopfield went to Princeton University and then in 1980 moved to the California Institute of Technology. He returned to Princeton in 1997 where he remained for the rest of his career.

As well as the Nobel prize, Hopfield won the 2001 Dirac Medal and Prize from the International Center for Theoretical Physics as well as the Albert Einstein World Award of Science in 2005. He also served as president of the American Physical Society in 2006.

  • Two papers written by this year’s physics laureates in journals published by IOP Publishing, which publishes Physics World, can be read here.
  • The Institute of Physics, which publishes Physics World, is running a survey gauging the views of the physics community on AI and physics till the end of this month. Click here to take part.

SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post John Hopfield and Geoffrey Hinton share the 2024 Nobel Prize for Physics appeared first on Physics World.

]]>
News Duo win for their work on machine learning https://physicsworld.com/wp-content/uploads/2024/10/00001-NOBEL-Physics-2024-new.jpg
Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows https://physicsworld.com/a/roger-penrose-the-nobel-laureate-with-a-preference-for-transparencies-over-slideshows/ Tue, 08 Oct 2024 09:34:26 +0000 https://physicsworld.com/?p=117289 Tushna Commissariat recounts a fascinating chat with Roger Penrose

The post Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows appeared first on Physics World.

]]>


As a young physics student, I spent the summer of 2004 toting around Roger Penrose’s The Road to Reality: A Complete Guide to the Laws of the Universe. It was one of the most challenging popular-science books I had ever come across, and I, like many others, was intrigued by Penrose’s treatise and his particular ideas about our cosmos. So I must admit that just over a decade later, when I had the opportunity to meet the man himself at a 2015 conference hosted by Queen Mary University of London, I was still somewhat starstruck.

The conference in question, “Einstein’s Legacy: Celebrating 100 years of General Relativity”, included scientists, writers and journalists who gave talks on everything from the “physiology of GR” to light cones and black holes. Penrose was one of the plenary speakers on the Saturday evening, and I was promptly amused when he began his talk on “Light cones, black holes, infinity and beyond” with a rather beautiful, if extremely old-school, transparency. Those who had attended his talks before (and indeed even to this day) already knew of this particular habit, as Penrose famously dislikes slides and prefers to give his talks with his own hand-drawn colourful sketches – in fact, I’ve never seen quite such a colourful black hole! In my blog from 2015, I described the talk as “equal parts complex, intriguing and amusing”, and I recall thoroughly enjoying it.

As any good science journalist would, I attempted to speak with him after the talk, but he was absolutely mobbed by the many students and other enthusiastic scientists at the event. So I decided to bide my time and attempt to catch him at the dinner afterwards, where he again held court with all the QMUL students who hung on his every word. It was only after 10 p.m. that I managed to get him alone to interview him. My colleague and I set up a camera in a quiet classroom and, as we asked Penrose our first question on cosmology, a deep rumbling sound took over the room – the District and Hammersmith & City tube lines run past most of the classrooms on the campus.

We spent most of the interview stopping and starting and attempting to perfectly time when the next tube would rumble past. Penrose was extremely patient despite how late it was, and the fact that he had been talking for hours already. The many interruptions to filming did mean that we had the chance to chat casually with him, and though I cannot recall the exact details, the conversation was equal parts fascinating and rambling, as we went off on many tangents.

You can watch the final version of my interview with Penrose above, to learn more about who inspired him, his views on the future of cosmology, and how his career-long interest in black holes – which won him the 2020 Nobel prize – first began.


The post Roger Penrose: the Nobel laureate with a preference for transparencies over slideshows appeared first on Physics World.

]]>
Blog Tushna Commissariat recounts a fascinating chat with Roger Penrose https://physicsworld.com/wp-content/uploads/2020/10/Penrose-home-pic.jpg
Laureates on film: Nobel winners who have graced our silver screen https://physicsworld.com/a/laureates-on-film-nobel-winners-who-have-graced-our-silver-screen/ Tue, 08 Oct 2024 09:00:38 +0000 https://physicsworld.com/?p=116907 Chatting with Frank Wilczek and Albert Fert

The post Laureates on film: Nobel winners who have graced our silver screen appeared first on Physics World.

]]>

One of the benefits of working at Physics World is that you get to meet some of the world’s best and brightest physicists – some of whom are Nobel laureates, and some who could very well be among this year’s winners.

For years I attended the March Meeting of the American Physical Society – a gathering of upwards of 10,000 physicists where you are sure to bump into a Nobel laureate or two. At the 2011 meeting in Dallas I had the pleasure of interviewing MIT’s Frank Wilczek, who shared the 2004 prize with David Gross and David Politzer “for the discovery of asymptotic freedom in the theory of the strong interaction”.

But instead of looking back on his work on quarks and gluons, Wilczek was keen to chat about the physics of superconductivity and its wide-reaching influence on theoretical physics. You can watch a video of that interview above or here: “Superconductivity: a far-reaching theory”.

Amusing innuendo

Wilczek was a lovely guy and I was really pleased four years later when he recognized me at the Royal Society in London. We were both admiring portraits of fellows, and the amusing innuendo found in one of the picture captions. On a more serious note, we were both there for a celebration of Maxwell’s equations and you can read more about the event here: “A great day out in celebration of Maxwell’s equations”.

Also at that event in London were John Pendry of nearby Imperial College London and Harvard University’s Federico Capasso – who are both on our list of people who could win this year’s Nobel prize. Pendry is a pioneer in the mathematics that describes how metamaterials can be used to manipulate light in weird and wonderful ways – and Capasso has spent much of his career making such metamaterials in the lab, and commercially.

The Royal Society was also where I recorded a video interview with Albert Fert, who shared the 2007 prize with Peter Grünberg for their work on giant magnetoresistance (watch below). A decade or so earlier, I had completed a PhD on ultrathin magnetic materials, so I was very happy to hear that two pioneers of the field had been honoured.

In the interview, Fert looks to the future of spintronics. This is an emerging field in which the magnetic spin of materials is used to store and transport information – potentially using much less energy than conventional electronics.

I recorded a second video interview that day with David Awschalom, now at the University of Chicago. He is a pioneer in spintronics and much of his work is now focused on using spins for quantum computing. Another potential Nobel laureate perhaps?

We don’t do video interviews anymore – instead we chat with people on our podcasts. As you can see from our videos, I really struggled with the medium. The laureates, however, were real pros!


The post Laureates on film: Nobel winners who have graced our silver screen appeared first on Physics World.

]]>
Blog Chatting with Frank Wilczek and Albert Fert https://physicsworld.com/wp-content/uploads/2024/10/Albert-Fert.jpg
How to rotate your mattress like a physics Nobel prizewinner https://physicsworld.com/a/how-to-rotate-your-mattress-like-a-physics-nobel-prizewinner/ Mon, 07 Oct 2024 17:00:37 +0000 https://physicsworld.com/?p=117281 A tongue-in-cheek e-mail exchange with 1973 Nobel Prize winner Brian Josephson shows that for some laureates, scientific rigour extends to ordinary life, too

The post How to rotate your mattress like a physics Nobel prizewinner appeared first on Physics World.

]]>
Amid the hype of Nobel Prize week, it’s important to remember that in many respects, Nobel laureates are just like the rest of us. They wake up and get dressed. They eat. They go about their daily lives. And when it’s time for bed, they lie down on mattresses that have been rotated scientifically, according to the principles of symmetry group theory.

Well, Brian Josephson does, anyway.

In the early 1960s, Josephson – then a PhD student in theoretical condensed matter physics at the University of Cambridge, UK – predicted that a superconducting current should be able to tunnel through an insulating junction even when there is no voltage across it. He also predicted that if a voltage is applied, the current will oscillate at a well-defined frequency. These predictions were soon verified experimentally, and in 1973 he received a half share of the Nobel Prize for Physics (Ivar Giaever and Leo Esaki, who did experimental research on quantum tunnelling in superconductors and semiconductors respectively, got the other half).

Subsequent work has borne out the importance of Josephson’s discovery. “Josephson junctions” are integral to instruments called SQUIDs (superconducting quantum interference devices) that measure magnetic fields with exquisite sensitivity. More recently, they’ve become the foundation for superconducting qubits, which drive many of today’s quantum computers.

Josephson himself, however, lost interest in theoretical condensed-matter physics. Instead, he has devoted most of his post-PhD career to the physics of consciousness, researching topics such as telepathy and psychokinesis under the auspices of the Mind-Matter Unification Project he founded.

An unusual scientific paper

Josephson’s later work hasn’t attracted much support from his fellow physicists. Still, he remains an active member of the community and, incidentally, a semi-regular contributor to Physics World’s postbag. It was in this context that I learned of his work on the pressing domestic dilemma of mattress rotation.

In December 2014, Josephson responded to a call for submissions to Physics World’s Lateral Thoughts column of humorous essays with a brief but tantalizing message. “What a pity my ‘Group Theory and the Art of Mattress Turning’ is too short for this,” he wrote. This document, Josephson explained, describes “the order-4 symmetry group of a mattress, and how an alternating sequence of the two easiest non-trivial group operations…takes you in sequence through all four mattress orientations, thereby preserving as much as possible the symmetry of the mattress under perturbations by sleepers [and] enhancing its lifetime.”

At the time, I had only recently purchased my first mattress, and I was keen to avoid shelling out for another any time soon. I therefore asked for more details. Within days, Josephson kindly replied with a copy of his mock paper, in the form of a scanned “cribsheet” which, he explained, lives under the mattress in the home he shares with his wife.

An argument from symmetry

Like all good scientific papers, Josephson’s “Group Theory and the Art of Mattress Turning” begins with a summary of the problem. “A mattress may be laid down on a bed in four different orientations,” it states. “For maximum life it should be cycled regularly through these four orientations. How may a mattress user ensure that this be done?”

The paper goes on to observe that the symmetry group of a mattress (that is, the collection of all transformations under which it is mathematically invariant) contains four elements. The first element is the identity transformation, which leaves the mattress’ orientation unchanged. The other three elements are rotations about the mattress’ axes of symmetry. Listed “in order of increasing physical effort required to perform”, these rotations are:

  • V, rotation by π (180 degrees) about a vertical axis (that is, keeping the mattress flat and spinning it around so that the erstwhile head area is at the feet)
  • L, rotation by π about the longer axis of symmetry (that is, flipping the mattress from the side of the bed, such that the head and foot ends remain in the same position relative to the bed, but the mattress is now upside down)
  • S, rotation by π about the shorter axis of symmetry (that is, flipping the mattress from the end of the bed, such that the head and foot ends swap places while the mattress is simultaneously turned upside down)

“Ideally, S should be avoided in order to minimize effort”, the paper continues. Fortunately, there is a solution: “It is easily seen that alternate applications of V and L will cause the mattress to go through all ‘proper’ orientations relative to the bed, in a cycle of order 4. The following algorithm will achieve this in practice: Odd months, rotate about the lOng axis. eVen months, rotate about the Vertical axis.” In case this isn’t memorable enough, the paper advises that “potential users of this algorithm may find it helpful to write it down on a piece of paper which should be slipped under the mattress for retrieval later when it may have been forgotten”.
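Josephson's cycle is easy to verify computationally. The sketch below is our own, not from the cribsheet: it represents an orientation as a pair of booleans (ends swapped, upside down) and confirms that alternating V and L visits all four orientations before returning to the start, whereas repeating a single operation does not:

```python
from itertools import cycle

# an orientation is (ends_swapped, upside_down); the identity is (False, False)
def V(o): return (not o[0], o[1])        # spin flat about the vertical axis
def L(o): return (o[0], not o[1])        # flip about the long axis
def S(o): return (not o[0], not o[1])    # flip about the short axis

def visit(moves, start=(False, False), steps=4):
    # apply the moves in alternation, recording every orientation reached
    seen, o, ops = [start], start, cycle(moves)
    for _ in range(steps):
        o = next(ops)(o)
        seen.append(o)
    return seen
```

Here `visit([V, L])` passes through all four orientations and returns to the identity after four moves – the "cycle of order 4" the paper promises – while `visit([V, V])` only ever reaches two of them.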

A challenging problem

The paper concludes, as per convention, with an acknowledgement section and a list of references. In the former, Josephson thanks “cj” – presumably his wife, Carol – “for bringing this challenging problem to my attention”. The latter contains a single citation, to a “Theory of Compliant Mattress Group lecture notes on applications of group theory” supposedly produced by Josephson’s office-mate Volker Heine.

The most endearing part of the paper, though, is the area below the references in the scanned cribsheet. This contains extensive handwritten notes on months and rotations, strongly suggesting that Josephson does, in fact, rotate his mattress according to the above-outlined principles. Indeed, in a postscript to his e-mail, Josephson noted that he and Carol recently had to modify the algorithm in response to a change in experimental conditions, namely the purchase of “a very flexible foam mattress”. This, he observed, “makes S rotations easier than L rotations, so we use that instead”.

I wish I could say that I adopted this method of mattress rotation in my own domestic life. Alas, my housekeeping is not up to Nobel laureate standards: I rotate my mattress approximately once a season, not once a month as the algorithm requires. However, whenever I do get round to it, I always think of Brian Josephson, the unconventional Nobel laureate whose tongue-in-cheek determination to apply physics to his daily life never fails to make me smile.


The post How to rotate your mattress like a physics Nobel prizewinner appeared first on Physics World.

]]>
Blog A tongue-in-cheek e-mail exchange with 1973 Nobel Prize winner Brian Josephson shows that for some laureates, scientific rigour extends to ordinary life, too https://physicsworld.com/wp-content/uploads/2024/10/Mattress.jpg
European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ https://physicsworld.com/a/european-space-agency-launches-hera-mission-to-investigate-asteroid-crash-scene/ Mon, 07 Oct 2024 14:53:21 +0000 https://physicsworld.com/?p=117278 Hera will perform a close-up examination of a 2022 impact on Dimorphos by NASA's DART mission

The post European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ appeared first on Physics World.

]]>
The European Space Agency (ESA) has launched a €360m mission to perform a close-up “crash-scene” investigation of the 150 m-diameter asteroid Dimorphos, which was purposely hit by a NASA probe in 2022. Hera took off aboard a SpaceX Falcon 9 rocket from Cape Canaveral at 10:52 local time. The mission should reach the asteroid in December 2026.

On 26 September 2022, NASA confirmed that its $330m Double Asteroid Redirection Test (DART) mission successfully demonstrated “kinetic impact” by hitting Dimorphos at a speed of 6.1 km/s. This resulted in the asteroid being put on a slightly different orbit around its companion body – a 780 m-diameter asteroid called “Didymos”.

A month later in October, NASA confirmed that DART had altered Dimorphos’ orbit by 32 minutes, shortening the 11 hour and 55-minute orbit to 11 hours and 23 minutes. This was some 25 times greater than the 73 seconds NASA had defined as a minimum successful orbit-period change. Much of the momentum change came from the ejecta liberated by the impact, including a plume of debris that extended more than 10 000 km into space.
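As a quick sanity check, the quoted orbital periods can be converted to seconds and compared with the success threshold:

```python
# orbital periods of Dimorphos quoted by NASA, converted to seconds
before = 11 * 3600 + 55 * 60   # 11 h 55 min, pre-impact
after = 11 * 3600 + 23 * 60    # 11 h 23 min, post-impact
delta = before - after

print(delta // 60)   # change in minutes: 32
print(delta / 73)    # roughly 26 times the 73 s success threshold
```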

Mars flyby

The Hera mission, which has 12 instruments including cameras and thermal-infrared imagers, will perform a detailed post-impact survey of Dimorphos. This will involve measuring its size, shape, mass and orbit more precisely than has been done to date by follow-up measurements from ground- and space-based observatories, including the Hubble Space Telescope.

It is hoped that Hera will be able to approach to within 200 m of the surface of Dimorphos to deliver 2 cm imaging resolution in certain sections.

Part of the Hera mission involves releasing two cubesats – each the size of a shoebox – that will also have imagers and radar onboard. They will examine Dimorphos’ internal structure to determine whether it is a rubble pile or has a solid core surrounded by layers of boulders.

The cubesats will also attempt to land on the asteroid with one measuring the asteroid’s gravitational field. The cubesats are also technology demonstrators, testing communication in deep space between them and Hera.

Once Hera’s mission is complete about six months after arrival at Dimorphos, it may also attempt to land on the asteroid, although a decision to do so has not yet been made.

On its way to Dimorphos, next year Hera will carry out a “swingby” of Mars and a flyby of the Martian moon Deimos.

The post European Space Agency launches Hera mission to investigate asteroid ‘crash-scene’ appeared first on Physics World.

]]>
News Hera will perform a close-up examination of a 2022 impact on Dimorphos by NASA's DART mission https://physicsworld.com/wp-content/uploads/2024/10/Last_view_of_Hera_spacecraft-small.jpg
Use our infographic to predict this year’s Nobel prize winners https://physicsworld.com/a/use-our-infographic-to-predict-this-years-nobel-prize-winners/ Mon, 07 Oct 2024 14:00:18 +0000 https://physicsworld.com/?p=116905 We are expecting a prize in condensed-matter physics in 2024

The post Use our infographic to predict this year’s Nobel prize winners appeared first on Physics World.

]]>
PW Nobel Infographic

Part of the fun of the run-up to the announcement of the Nobel Prize for Physics is the speculation – serious, silly or otherwise – of who will be this year’s winner(s). Here at Physics World, we don’t shy away from making predictions but our track record is not particularly good.

That’s not surprising, because the process of choosing Nobel winners is highly secretive and we know nothing about who has been nominated for this year’s prize. That’s thanks to the 50-year embargo on all information related to the decision.

The 2024 prize will be announced tomorrow and if you would like to know more about how the Nobel Committee for Physics operates, check out this article that’s based on an interview with a former committee chair: “Inside the Nobels: Lars Brink reveals how the world’s top physics prize is awarded”.

Charting history

Several years ago we created an infographic that charts the history of the Nobel Prize for Physics in terms of the discipline of the winning work (see figure). For example, last year the prize was shared by Pierre Agostini, Ferenc Krausz and Anne L’Huillier for their pioneering work using attosecond laser pulses to study the behaviour of electrons. We categorized this prize as “atomic, molecular and optical” and you can see that prize at the top of the infographic, connected to its category by a darkish blue line.

As well as revealing which disciplines of physics have received the most attention from successive Nobel committees, the infographic also shows that some disciplines fall in and out of favour, while others have produced a steady stream of winners over the past 12 decades. The infographic shows, for example, the return of quantum physics to the Nobel realm. The discipline was popular with the Nobel committee in the 1910s–1950s and then fell completely out of favour until 2012.

Another thing that is apparent from the infographic is that after about 1990 there tends to be well-defined gaps between disciplines. And for no good scientific reason, we have decided that we can analyse these gaps and use the results to make predictions!

Partially correct

Last year, we noticed that atomic, molecular and optical physics was due a prize. That observation, in part, led us to predict that Paul Corkum, Ferenc Krausz and Anne L’Huillier would win in 2023. This partially correct prediction has emboldened our faith in the mystical ability of our infographic to help predict winners.

So what does that mean for our predictions for this year?

The infographic makes it clear that we are overdue a prize in condensed-matter physics. Some possibilities that we have identified include magic-angle graphene and metamaterials.

So tune into Physics World tomorrow and find out if we are right.


The post Use our infographic to predict this year’s Nobel prize winners appeared first on Physics World.

]]>
Blog We are expecting a prize in condensed-matter physics in 2024 https://physicsworld.com/wp-content/uploads/2024/10/PW-Nobel-Infographic-list.jpg
To boost battery recycling, keep a close eye on the data https://physicsworld.com/a/to-boost-battery-recycling-keep-a-close-eye-on-the-data/ Mon, 07 Oct 2024 10:20:09 +0000 https://physicsworld.com/?p=117140 Real-time analysis can drive improvements that benefit manufacturers as well as the environment, says Kalle Blomberg

The post To boost battery recycling, keep a close eye on the data appeared first on Physics World.

]]>
How did Sensmet get started?

The initial idea to build an online system that uses a micro-plasma to analyse metals in liquids came from Toni Laurila, who is now Sensmet’s co-founder and CEO. He got the idea during his post-doctoral studies at the University of Cambridge, UK, and after he returned to Finland, we started to develop an online instrument for industrial process and environmental applications.

Typically, if you need to measure metals in liquid – whether it’s wastewater, industrial process water or natural bodies like rivers and lakes – you collect a sample and send it to a laboratory for analysis. Depending on the lab, it might take up to several days to get the results. If you need to control a process based on such outdated data, it’s like trying to drive your car relying solely on a rearview mirror showing an image that is 4‒10 hours old. By the time you see what’s happening, you’ve already veered off the road.

We saw that we can do for liquid monitoring what other companies did for online gas monitoring around 30 years ago, when the regulations started changing in a way that meant practically all gaseous emissions needed to be monitored in real time. We believe this will also be the future for liquids.

What kinds of liquids are you analysing?

Regulations on real-time monitoring of liquids are going to come at some point, and we believe that our technology will make that possible, but it has not happened yet. This means that for now, we are focusing on analysing liquids involved in industrial processes, because that is an area where we can give customers a return on their investment.

A good example is the battery industry, which is growing rapidly due to the popularity of electric cars. This is driving huge demand for lithium and other metals. If we want to produce enough electric cars to reduce emissions from petrol and diesel vehicles, we can’t do it just by mining new metals. The recycling rate for old batteries also needs to rise.

How does battery recycling work, and how do Sensmet’s analysers help?

Typically, you take the end-of-life battery from an electric car and shred it into very fine particles to create what’s known as the black mass. Separating the valuable metals from the black mass then involves a hydrometallurgical process, in which the metals are converted into a liquid form, typically by dissolving or leaching them in acids. Once the valuable metals are dissolved, they are extracted from the solution one by one through processes such as solvent extraction or ion exchange.

What makes our analyser particularly well-suited for monitoring this battery recycling process is that we can measure multiple metals simultaneously. This includes light elements such as lithium and sodium that cannot be measured using X-ray fluorescence, a commonly used technique for metals analysis.

Real-time measurement is essential for optimizing the battery metal recycling process. By continuously monitoring the concentrations of key metals such as lithium, manganese, cobalt, nickel, copper, aluminium and calcium, process operators can quickly detect anomalies, enhancing both quality and efficiency. The speed of the processes used to separate elements from the black mass is another critical factor. If you’re having to wait around for a laboratory analysis, you cannot optimize them very well. You’re not getting the rapid, real-time measurements you need to improve your yield, and that can mean increased waste.

Sensmet installation

A clear example is ion exchange columns, which require periodic regeneration as they become saturated. Our analyser monitors the solution from these columns, and when it detects a rise in, say, nickel concentration, the customer knows it’s time to regenerate the column. In these situations, the speed of analysis is crucial for optimizing the production efficiency.
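The regeneration logic described above amounts to a threshold check on a real-time concentration reading. A minimal sketch is below; the metal names, breakthrough limit and readings are invented for illustration and are not Sensmet parameters.

```python
# Minimal sketch of breakthrough detection for an ion-exchange column.
# The threshold and readings are hypothetical illustrations.

BREAKTHROUGH_PPM = {"nickel": 5.0}  # regenerate when effluent Ni exceeds this

def check_column(readings_ppm):
    """Return the metals whose effluent concentration signals saturation."""
    return [metal for metal, ppm in readings_ppm.items()
            if ppm > BREAKTHROUGH_PPM.get(metal, float("inf"))]

# One simulated real-time reading from the column outlet.
alerts = check_column({"nickel": 6.2, "lithium": 0.1})
if alerts:
    print(f"Regenerate column: breakthrough detected for {', '.join(alerts)}")
```

In practice each reading would come from the multi-metal analyser rather than a literal, which is exactly why measuring several metals simultaneously and continuously matters here.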

What challenges did you encounter in developing your analyser?

While proving a technology’s effectiveness in the lab is relatively straightforward, developing a product that performs reliably in real-world conditions is much more challenging. Our customers require an analyser that is both robust and reliable in demanding industrial environments, consistently delivering accurate results day after day, year after year.

We also conduct environmental online monitoring of industrial wastewater, which is challenging in Finland, where winter temperatures can drop to –35 °C. To address this, we can house our analyser in a container and use heated measurement lines to transfer the liquid samples, for example from a settling pond.

These harsh conditions and customer requirements are some of the reasons we chose to use a spectrometer from Avantes in our analyser. The way Avantes builds their spectrometers, they are quite robust. If you accidentally hit them a little bit, they maintain their calibration.

What are some other advantages of Avantes spectrometers?

We bought our first spectrometer from them before we spun out of the university in 2017. It was a high-resolution system for plasma research, and it allowed us to do very fast measurements and collect multiple spectra at high speeds. After that, it was easy to choose the next spectrometer from the same manufacturer because we’d already built the programs and controls for our prototype analyser based on it. And we’ve always had very good service from Avantes. When we have faced a problem, they’ve always helped us quickly. That’s very important, especially at the university stage when we were using the spectrometer beyond its regular scope.

What do you know now that you wish you’d known when Sensmet got started?

When we started building our analyser and realized what it could do, we felt like kids in a candy shop surrounded by a million treats. There is water everywhere, so we believed our technology had universal appeal and expected everyone to adopt it immediately.

As a start-up, focus is everything. You need to concentrate on a specific market and convince those customers that your product is the right fit for them. Only then can you expand to the next market. However, we were young with limited experience, so it took us some time to realize this.

What are you working on now?

Our first product is ready, so our focus is on pushing it to the market. We are working with multiple battery manufacturing companies and mining companies to make ourselves known as a reliable provider of analysers that can really bring significant added value to customer processes.

Kalle Blomberg is the chief technology officer at Sensmet

The post To boost battery recycling, keep a close eye on the data appeared first on Physics World.

]]>
Interview Real-time analysis can drive improvements that benefit manufacturers as well as the environment, says Kalle Blomberg https://physicsworld.com/wp-content/uploads/2024/10/web-Sensmet-environmental-monitoring.jpg newsletter
Fusion, the Web and electric planes: how spin-offs from big science are transforming the world https://physicsworld.com/a/fusion-the-web-and-electric-planes-how-spin-offs-from-big-science-are-transforming-the-world/ Mon, 07 Oct 2024 10:00:41 +0000 https://physicsworld.com/?p=116924 James McKenzie looks at some of the unexpected spin-offs from big science

The post Fusion, the Web and electric planes: how spin-offs from big science are transforming the world appeared first on Physics World.

]]>
With the CERN particle-physics lab turning 70 this year, I’ve been thinking about the impact of big science on business. There are hundreds – if not thousands – of examples I could cite, the most famous being, of course, the World Wide Web. It was devised at CERN in 1989 by the British computer scientist Tim Berners-Lee, who was seeking a way to organize and share the huge amounts of data produced by the lab’s fundamental science experiments.

While the Web wasn’t a spin-off technology as such, it’s hard to think of anything developed with one purpose in mind that’s had such far-reaching applications across the whole of business and society. Indeed, CERN can lay claim to lots of spin-off firms that have pushed the boundaries of technology. Many of those firms specialize in detectors, imaging and sensors, but quite a few are involved in materials, coatings, healthcare and environmental applications.

It would be impossible for me to discuss them all in a short article, but there are lots – and CERN is rather good these days at knowledge transfer. So too are large national labs, such as Harwell and Daresbury in the UK, which have co-ordinated spin-out and knowledge transfer activities supported by UK Research and Innovation. A recent report from the UK government claims that firms spun out from the country’s public sector had raised a total of £5.1bn of investment and created more than 7000 new jobs over the last four decades.

One particularly exciting spin-off from big science is from the burgeoning fusion industry. There are currently about 40 different companies around the world trying to develop commercial fusion-power plants that can serve as a sustainable source of electricity in our quest for net zero. Whilst the sector is making steady progress towards that goal, the associated technology could have some other rather interesting applications too.

Fusion tech

Consider Tokamak Energy, which was founded in 2009 by a group of scientists and researchers at the UK Atomic Energy Authority, making it a spin-out of sorts. The company’s main aim is to build a tokamak fusion plant that could one day deliver electricity to the grid. But over the years it’s also become rather good at making high-temperature superconducting (HTS) magnets, with more than 200 patents to its name.

The company is, for example, working with the US Department of Energy, via the Defense Advanced Research Projects Agency (DARPA), to build a magnetohydrodynamic (MHD) drive. Such a device, which provides propulsion without any moving parts, conjures up visions of the great 1990s movie The Hunt for Red October, where Sean Connery played a Soviet sailor captaining a submarine that can’t be detected by sonar.

One particularly exciting spin-off from big science is from the burgeoning fusion industry

In terms of physics, an MHD drive uses electric fields to accelerate an electrically conducting fluid. A magnetic field applied perpendicularly to the flow creates a thrust – the Lorentz force – at 90° to the electric and magnetic fields, in accordance with the right-hand rule. Back in the 1990s, the Japanese firm Mitsubishi did build a ship – Yamato 1 – powered by a prototype MHD thruster, but with the technology available at the time limiting magnetic fields to just 4 T, the boat only had a top speed of 15 km/h.
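The right-hand-rule geometry can be checked in a few lines of Python; the current density and field values below are round illustrative numbers, not specifications of Yamato 1 or the DARPA project.

```python
# Lorentz force density f = J x B for an MHD thruster: drive a current
# along x, apply a field along y, and the thrust emerges along z.
# All values are illustrative only.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

J = (1000.0, 0.0, 0.0)              # current density, A/m^2, along x
f_4T = cross(J, (0.0, 4.0, 0.0))    # 1990s-era 4 T field
f_24T = cross(J, (0.0, 24.0, 0.0))  # modern 24 T HTS field

print(f_4T, f_24T)  # -> (0.0, 0.0, 4000.0) (0.0, 0.0, 24000.0)
```

At fixed current the thrust scales linearly with the field, so a 24 T HTS magnet offers six times the force density of the 4 T technology available to Yamato 1.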

Since then, however, HTS magnet technology has markedly improved. In 2019, for example, Tokamak Energy announced it had built a magnet that produced a record-breaking 24 T field at 20 K. Based on superconducting barium-copper-oxide tape technology, the magnet is designed to be used in the poloidal field coils of a tokamak fusion device. The conventional copper magnets at the Joint European Torus (JET) fusion facility in the UK, in contrast, produced fields of only around 4 T.

For Tokamak Energy to create such a powerful magnet was quite an achievement, and you can imagine that it could improve MHD performance and open the door to many other applications too. In fact, the company has just launched a new business division called TE Magnetics, focusing on HTS magnet technology. It wants to tap into a market that a recent report from Future Market Insights reckons was worth an astonishing $3.3bn in 2023.

Aircraft advances

David Kingham, co-founder and executive vice-chair of Tokamak Energy, points to applications of HTS magnets in everything from space thrusters and proton-beam therapy to motors and generators for wind turbines and planes. That final application is perhaps the most intriguing as it’s very difficult for non-superconducting motors to achieve the huge power density needed for large aircraft to fly.

If you’re thinking an HTS-powered plane sounds far-fetched, it turns out that Airbus is already on the case, as are many other firms too. Over the last few years, Airbus has been developing prototype motors using this kind of technology that, to me, are a serious contender in the quest for low-carbon air travel. Through its ASCEND programme, the company has already built a 500 kW powertrain featuring an electric motor powered by the current from HTS tape.

Airbus thinks the cryogenics needed to cool the tape could be driven by the liquid hydrogen fuel that would generate the power in a fuel cell. The beauty of superconducting systems is that they’re much more efficient than conventional technology and can deliver huge power densities – pointing the way to lighter and more efficient planes.

If you think a plane powered by high-temperature superconductors sounds far-fetched, it turns out that Airbus is already on the case

There’s obviously a little more work to do before such technology can reach commercial reality. After all, getting today’s city-hopping turboprop planes off the ground using electric power alone would require around 8 MW of power. But what Airbus has done is a promising start – and reliable HTS magnets will be vital for this work to really succeed.

Another company working on the electrification of air transport is Evolito, which was spun out in 2021 by the UK firm YASA. Now owned by Mercedes-Benz, YASA is a pioneer of “axial-flux” electric motors, which have very high power densities yet don’t need to be cooled to cryogenic temperatures. YASA has already worked with Rolls-Royce to develop Spirit of Innovation, which in 2021 claimed the record for the world’s fastest electric plane, clocking a top speed of 623 km/h.

My message is simple: spin-offs and spin-outs are everywhere. So next time you have your head down and are working on something very specific, keep an open mind as to what else it could be used for – it may be more commercially relevant than you think. The applications could be even more than you ever imagined – and if you don’t believe me, just go and ask Tim Berners-Lee.

The post Fusion, the Web and electric planes: how spin-offs from big science are transforming the world appeared first on Physics World.

]]>
Opinion and reviews James McKenzie looks at some of the unexpected spin-offs from big science https://physicsworld.com/wp-content/uploads/2024/09/2024-10-Transactions-Tokamak.jpg newsletter
Heart-on-a-chip reveals impact of spaceflight on cardiac health https://physicsworld.com/a/heart-on-a-chip-reveals-impact-of-spaceflight-on-cardiac-health/ Mon, 07 Oct 2024 08:00:22 +0000 https://physicsworld.com/?p=117225 A heart-on-a-chip platform sent to the International Space Station reveals how 30 days in space alters heart muscle cells

The post Heart-on-a-chip reveals impact of spaceflight on cardiac health appeared first on Physics World.

]]>
Astronauts spending time in the harsh environment of space often experience damaging effects on their health, including a deterioration in heart function. Indeed, the landmark NASA Twins Study found that an astronaut who spent a year on the International Space Station (ISS) had significantly increased cardiac output and reduced arterial pressure compared with his identical twin who remained on Earth. And with missions planned to Mars and beyond, there’s an urgent need to understand how long-duration spaceflight affects the cardiovascular system.

With this aim, a research team headed up at Johns Hopkins University has sent a heart-on-a-chip platform to the International Space Station and investigated the impact of 30 days in space on the cardiac cells within. The findings, reported in the Proceedings of the National Academy of Sciences, could also shed light on the changes in heart structure and function that occur naturally due to ageing.

“I began cardiac research after my own father died of heart disease when I was a senior college student, and my main motivation for studying the effects of spaceflight on cardiac cells stemmed from the striking resemblance between cardiac deterioration in microgravity and the ageing process on Earth,” project leader Deok-Ho Kim tells Physics World. “The ability to counteract the impacts of microgravity on cardiac function will be essential for prolonged duration human spaceflights, and may lead to therapies for ageing hearts on Earth.”

Engineered heart tissues

The heart-on-a-chip platform is based on engineered heart tissues (EHTs), in which heart muscle cells (cardiomyocytes) derived from human-induced pluripotent stem cells are cultured within a hydrogel scaffold. The key advantage of this design over previous studies using 2D cultured cells is its ability to more accurately replicate human cardiac muscle tissue.

“Cells cultured on traditional 2D petri dishes do not behave as they would in the body, whereas our platform provides a physiologically relevant 3D environment that mimics in vivo conditions,” Kim explains.

Inside the platform, the EHTs are mounted between two posts, one of which is flexible and contains a small magnet that moves as the tissue contracts. Small magnetic sensors measure the changes in magnetic flux to determine tissue contraction in real time.

Designed for space

To allow culture of the cardiac cells in microgravity, Kim’s team – primarily postdoctoral fellow Jonathan Tsui – developed custom sealed tissue chambers containing six EHTs. These chambers, along with the magnetic sensors and associated electronics, were housed within a compact plate habitat that required minimal handling to maintain cell viability. “The platform was designed to be easily maintained by astronauts aboard the ISS, an important consideration as crew time is a precious resource,” says Kim.

The tissue chambers were carefully transported by Tsui to the Kennedy Space Center, then launched to the ISS aboard the SpaceX CRS-20 mission in March 2020. The researchers then monitored the function of the cardiac tissues for 30 days in microgravity, using the sensors to automatically detect magnet motion as the cells beat. The raw data were transmitted down from the ISS and converted into force and frequency measurements that provided insight into the contraction strength and beating patterns, respectively.
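The conversion from raw magnet-motion data to force and frequency can be illustrated with a toy trace. In the sketch below the sampling rate, post stiffness and synthetic 1 Hz beating signal are all assumptions for illustration, not the study's actual calibration.

```python
import math

# Toy conversion of a post-displacement trace into contraction force and
# beating frequency. FS, K_POST and the trace itself are assumed values.
FS = 100.0     # sampling rate, Hz (assumed)
K_POST = 1.0   # flexible-post stiffness, uN per um (assumed)

t = [i / FS for i in range(1000)]  # 10 s of samples
disp_um = [5.0 * max(math.sin(2 * math.pi * ti), 0.0) for ti in t]  # ~1 Hz beats

# Contraction force from Hooke's law for the bending post.
force_uN = [K_POST * d for d in disp_um]

# Beating frequency: count upward crossings of half the peak displacement.
thresh = 0.5 * max(disp_um)
beats = sum(1 for a, b in zip(disp_um, disp_um[1:]) if a <= thresh < b)
freq_hz = beats / t[-1]
print(beats, round(freq_hz, 2))  # -> 10 1.0
```

The real platform infers displacement from magnetic flux at the sensors rather than measuring it directly, but the force-and-frequency bookkeeping follows the same pattern.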

Once the samples were back on Earth, the researchers examined the cardiac tissues during a nine-day recovery period. They compared their findings with results from an identical set of EHTs cultured on Earth for the same duration.

Cardiac impact

After 12 days on the ISS, the EHTs exhibited a significant decrease in contraction strength compared with both baseline values and the control EHTs on Earth. This reduction persisted throughout the experiment and during the recovery period on Earth. The cardiac tissues also exhibited increased incidences of arrhythmia (irregular heart rhythm) whilst on the ISS, although this resolved once back on Earth.

At the end of the experiment (day 39), Kim and colleagues examined the cardiac tissue using transmission electron microscopy. They found that spaceflight caused sarcomeres (protein bundles that help muscle cells contract) to become shorter and more disordered – a marker of human heart disease. The changes did not resolve after return to Earth and may be why the cardiac tissues did not regain contraction strength in the recovery period. The team also observed mitochondrial damage in the cells, including fragmentation, swelling and abnormal structural changes.

To further assess the impact of prolonged microgravity, the researchers performed RNA sequencing on the returned tissue samples. They observed up-regulation of genes associated with metabolic disorders, heart failure, oxidative stress and inflammation, as well as down-regulation of genes related to contractility and calcium signalling. Finally, they used in silico modelling to determine that spaceflight-induced oxidative stress and mitochondrial dysfunction were key to the tissue damage and cardiac dysfunction seen in space-flown EHTs.

“By conducting a detailed investigation into cellular changes under real microgravity conditions, we aimed to uncover the mechanisms behind these alterations, potentially leading to therapies that could benefit both astronauts and the ageing population,” says Kim.

Last year, the researchers sent a second batch of EHTs to the ISS to screen drugs that may protect against the effects of low gravity. They are currently analysing the data from these studies. “These results will help us refine the effectiveness of promising drug therapies for our upcoming third mission,” says Kim.

The post Heart-on-a-chip reveals impact of spaceflight on cardiac health appeared first on Physics World.

]]>
Research update A heart-on-a-chip platform sent to the International Space Station reveals how 30 days in space alters heart muscle cells https://physicsworld.com/wp-content/uploads/2024/10/7-10-24-Heart-on-a-chip-Tsui-Countryman.jpg newsletter1
Study finds preschool children form ‘social droplets’ when moving around the classroom https://physicsworld.com/a/study-finds-preschool-children-form-social-droplets-when-moving-around-the-classroom/ Sat, 05 Oct 2024 09:00:18 +0000 https://physicsworld.com/?p=117253 The movement of preschool children results in two distinct phases, finds study

The post Study finds preschool children form ‘social droplets’ when moving around the classroom appeared first on Physics World.

]]>
If you have ever experienced a preschool environment you will know how seemingly chaotic it can be. Now physicists in the US and Germany have examined the movement of preschool children in classroom and playground settings to determine if any rules can be gleaned from their dawdling.

To do so they put radio-frequency tags on the vests of more than 200 children aged between two and four and then monitored their position and trajectories via receivers placed around the environment.

The researchers found that the dynamics fall into two distinct phases. The first is a gas-like phase in which the children move freely while exploring their surroundings.

This was mostly seen in the playground where children could roam without restriction, with the researchers finding that toddlers’ movement is similar to that of pedestrian flow.

The second phase is a “liquid-vapour-like state”, in which the children act like molecules to form “droplets” of social groups. In other words, they coalesce into smaller, more clustered groups with some “free-moving” individuals entering and exiting these groups.

The team found that this phase was most evident in classrooms, in which the children are more constrained and social communication plays a bigger role. Indeed, this type of behaviour has not been observed in human movement before, with the findings offering new insights about the dynamics of low-speed movement.
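The liquid–vapour picture can be illustrated with a toy classifier: children within some interaction radius of another child count as part of a droplet, the rest as free-moving. The positions and the radius below are invented for illustration and are not the study's parameters.

```python
import math

RADIUS_M = 1.0  # assumed social interaction radius, metres

def phase_split(positions):
    """Return (clustered, free) counts for a list of (x, y) positions."""
    clustered = sum(
        1 for i, (xi, yi) in enumerate(positions)
        if any(math.hypot(xi - xj, yi - yj) < RADIUS_M
               for j, (xj, yj) in enumerate(positions) if j != i))
    return clustered, len(positions) - clustered

pos = [(0.0, 0.0), (0.5, 0.0), (0.6, 0.4),  # a three-child "droplet"
       (5.0, 5.0)]                          # one free-moving child
print(phase_split(pos))  # -> (3, 1)
```

Applied frame by frame to the radio-tag trajectories, a classifier like this would show individuals entering and leaving droplets over time, the behaviour described above.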

The post Study finds preschool children form ‘social droplets’ when moving around the classroom appeared first on Physics World.

]]>
Blog The movement of preschool children results in two distinct phases, finds study https://physicsworld.com/wp-content/uploads/2024/10/young-children-play-outside-794064370-Shutterstock_Monkey-Business-Images.jpg
Silk-on-graphene films line up for next-generation bioelectronics https://physicsworld.com/a/silk-on-graphene-films-line-up-for-next-generation-bioelectronics/ Fri, 04 Oct 2024 12:30:26 +0000 https://physicsworld.com/?p=117203 Researchers have grown a uniform two-dimensional layer of silk protein fragments on a van der Waals substrate for the first time

The post Silk-on-graphene films line up for next-generation bioelectronics appeared first on Physics World.

]]>
Researchers have succeeded in growing a uniform 2D layer of silk protein fragments on a van der Waals substrate – in this case, graphene – for the first time. The feat should prove important for developing silk-based electronics, which have been limited until now because of the difficulty in controlling the inherent disorder of the fibrillar silk architecture.

Silk is a protein-based material that humans have been using for over 5000 years. In recent years, researchers have been looking to exploit one of its two main components, silk fibroin (which is made up of protein fragments), in electronic and bioelectronic applications. This is because it can self-assemble into a range of fibril-based architectures that boast excellent mechanical and optical properties. Indeed, devices in which silk fibroin films are interfaced with van der Waals solids, metals or oxides appear to be particularly promising for making next-generation thin-film transistors, memory transistors (or memristors), human–machine interfaces and sensors.

There is a problem, however, in that silk cannot be used in its natural form for such devices because its fibres are arranged in a disordered, tangled fashion. This means it cannot uniformly or accurately modulate electronic signals.

Controlling natural disorder

A team of researchers, led by materials scientist and engineer James De Yoreo of the Pacific Northwest National Laboratory (PNNL) and the University of Washington, has now found a way to control this disorder. In their work, the researchers grew highly organized 2D films of silk fibroins on graphene, a highly conducting sheet of carbon just one atom thick.

Using atomic force microscopy, nano-Fourier transform infrared spectroscopy and molecular dynamics calculations, the researchers observed that the films consist of stable lamellae of silk fibroin molecules that have the same structure as the nano-crystallites of natural silk. The fibroins pack in precise parallel beta-sheets – a common protein shape found in nature – on this substrate.

Thanks to scanning Kelvin probe measurements, De Yoreo and colleagues also found that the films modulate the electric potential of the graphene substrate’s surface.

The researchers say that they took advantage of the inherent interactions of the silk molecules with the substrate and its crystallinity to force the silk molecules to assemble into a crystalline layer at the interface between the two materials. They then regulated the concentration of the aqueous solution in which the silk proteins had been dissolved to limit the number of silk layers that form. In this way, they were able to assemble single monolayers, bilayers or much thicker multilayers.

Uniform properties

Since the material is highly ordered, its properties are uniform, says De Yoreo. What’s more, because of the strong intermolecular interactions in the beta-sheet arrangement and the strong interactions with the substrate, it is highly stable. “In its pure state, it can regulate the surface potential of the underlying conductive substrate, but there are techniques for doping silk to introduce both optical and electronic properties that can greatly expand its useful properties,” he explains.

The researchers hope their results will help in the development of 2D bioelectronic devices that exploit natural silk-based layers chemically modified to provide different electronic functions. They also plan to use their starting material to create purely synthetic silk-like layers assembled out of artificial, sequence-defined polymers that mimic the amino acid sequence of the silk molecule. “In particular, we see potential for using these materials in memristors, for computing based on neural networks,” De Yoreo tells Physics World. “These are networks that could allow computers to mimic how the brain functions.”

It is important to note that the system developed in this work is nontoxic and water-based, which is crucial for biocompatibility, adds the study’s lead author Chenyang Shi.

The research is detailed in Science Advances.

The post Silk-on-graphene films line up for next-generation bioelectronics appeared first on Physics World.

]]>
Research update Researchers have grown a uniform two-dimensional layer of silk protein fragments on a van der Waals substrate for the first time https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_2D-silk-hero-image.jpg newsletter1
‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics https://physicsworld.com/a/sometimes-nature-will-surprise-us-juan-pedro-ochoa-ricoux-on-eureka-moments-and-the-future-of-neutrino-physics/ Fri, 04 Oct 2024 10:00:10 +0000 https://physicsworld.com/?p=116939 Particle physicist Juan Pedro Ochoa-Ricoux talks about how the next generation of neutrino experiments will test the boundaries of the Standard Model

The post ‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics appeared first on Physics World.

]]>
It was a once-in-a-lifetime moment during a meeting in 2011 when Juan Pedro Ochoa-Ricoux realized that new physics was emerging in front of his eyes. He was a postdoc at the Lawrence Berkeley National Laboratory in the US, working on the Daya Bay Reactor Neutrino Experiment in China. The team was looking at their first results when they realized that some of their antineutrinos were missing.

Ochoa-Ricoux has been searching for the secrets of neutrinos since he began his master’s degree at the California Institute of Technology (Caltech) in the US in 2003. He then completed his PhD, also at Caltech, in 2009, and is now a professor at the University of California Irvine, where neutrinos are still the focus of his research.

The neutrino’s non-zero mass directly conflicts with the Standard Model of particle physics, which is exciting news for particle physicists like Ochoa-Ricoux. “We actually like it when the theory doesn’t match the experiment,” he jokes, adding that his motivation for studying these elusive particles is for the new physics they could reveal. “We need to know how to extend [the Standard Model] and neutrinos are one area where we know it has to be extended.”

Because they rarely interact with matter, neutrinos are notoriously hard to study. Electron antineutrinos are, however, produced in measurable quantities by nuclear reactors, and this is what Daya Bay was measuring. The experiment consisted of eight detectors measuring the electron antineutrino flux at different distances from six nuclear reactors. As the antineutrinos disperse, the detectors further away are expected to measure a smaller signal than those close by.

However, when Ochoa-Ricoux and his team analysed their results, they found “a deficit in the far location that could not only be explained by the fact that those detectors were farther away”. Neutrinos come in three types, or “flavours”, and it seemed that some of the electron antineutrinos produced in the power plants were changing into tau and muon antineutrinos, meaning the detector didn’t pick them up.

This transformation of neutrino type, also known as “oscillation”, occurs for both neutrinos and antineutrinos. It was first observed in 1998, with the discovery leading to the award of the 2015 Nobel Prize for Physics. However, physicists are still not sure if antineutrinos and neutrinos oscillate in the same way. If they don’t, that could explain why there is more matter than antimatter in the universe.

The mathematics of neutrino oscillation is complex. Among many parameters, physicists define an angle called θ13, which plays a role in determining the probability of certain flavour oscillations. For differences in oscillation probabilities between neutrinos and antineutrinos to be possible, this quantity must be non-zero. When Ochoa-Ricoux was working on the Main Injector Neutrino Oscillation Search (MINOS) at Fermilab in the US for his PhD, he had found tantalizing but inconclusive evidence that θ13 is different from zero.

Juan Pedro Ochoa-Ricoux at the JUNO Observatory

The memorable meeting Ochoa-Ricoux recalled at the start of this article was, however, the first moment he realized “Oh, this is real”. Their antineutrino deficit data eventually proved that the angle is about nine degrees. This discovery set the stage for Ochoa-Ricoux’s career, which, a little like the oscillating neutrino, he describes as a “mixture of everything”.
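Daya Bay actually reported the mixing in terms of sin²(2θ13); its published 2012 best-fit value of approximately 0.092 translates into the “about nine degrees” quoted above. A short check, assuming that best-fit value:

```python
import math

# Reactor antineutrino survival probability is approximately
# P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm^2 [eV^2] * L [m] / E [MeV]),
# so the measured deficit pins down sin^2(2*theta13) directly.
sin2_2theta13 = 0.092  # Daya Bay's 2012 best-fit value

theta13_deg = math.degrees(0.5 * math.asin(math.sqrt(sin2_2theta13)))
print(round(theta13_deg, 1))  # -> 8.8
```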

The asymmetry between antimatter and matter is one of the biggest mysteries in physics and in the next four years, two experiments – Hyper-Kamiokande in Japan and the Deep Underground Neutrino Experiment (DUNE) in the US – will start looking for evidence of matter–antimatter asymmetry in neutrino oscillation (Ochoa-Ricoux is a member of DUNE). “Had θ13 been zero,” he says, “my job and my life would have been very very different”.

All hands on deck

On the one hand you analyse the data, but before you can do that, you actually have to build the apparatus

Ochoa-Ricoux wasn’t just analysing the results from Daya Bay, he was also assembling and testing the experiment. This was sometimes frustrating work – he remembers having to painstakingly remeasure detector components because they wouldn’t fit inside the machine. But he emphasizes that this was an important part of the Daya Bay discovery. “On the one hand you analyse the data, but before you can do that, you actually have to build the apparatus,” he says.

While Ochoa-Ricoux now spends much less time climbing inside detector equipment, he is actively involved in designing the next generation of neutrino experiments. As well as DUNE, he works on Daya Bay’s successor, the Jiangmen Underground Neutrino Observatory (JUNO) in China, a nuclear reactor experiment that is projected to start taking data at the end of the year.

The first neutrino oscillation measurement was made in 1998 by the Japanese researcher Takaaki Kajita, who would later share the 2015 Nobel Prize for Physics for his work. However, the experiment where Kajita made this observation, called Super-Kamiokande, was originally designed to search for proton decay.

Ochoa-Ricoux thinks that DUNE and JUNO need to be open to finding something equally unexpected. JUNO’s main aim is to determine which neutrino mass is the heaviest by measuring oscillating antineutrinos from nuclear power plants. It will also detect neutrinos coming from the Sun or the atmosphere, and Ochoa-Ricoux thinks this flexibility is vital.

“Sometimes nature will surprise us and we need to be ready for that,” he says, “I think we need to design our experiments in such a way that we can be sensitive to those surprises.”

Exploring the unknown

Experiments like DUNE and JUNO could change our understanding of the universe, but there is no guarantee that neutrinos hold the key to mysteries like matter–antimatter asymmetry. There’s therefore pressure to deliver results, but Ochoa-Ricoux is excited that the field is taking leaps into the unknown.

When you understand your world better, sometimes it’s impossible to predict what applications will come

He also argues that as well as advancing fundamental science, these projects could lead to new technologies. Medical imaging devices like MRI and PET scanners are offshoots of particle physics and he believes that “When you understand your world better, sometimes it’s impossible to predict what applications will come.”

However, at the heart of Ochoa-Ricoux’s mindset is the same fascination with the mysteries of the universe that motivated him to pursue neutrino physics as a student. For him, projects like JUNO and DUNE can justify themselves on those grounds alone. “We’re humans. We need to understand the world we live in. I think that’s highly valuable.”

The post ‘Sometimes nature will surprise us.’ Juan Pedro Ochoa-Ricoux on eureka moments and the future of neutrino physics appeared first on Physics World.

]]>
Feature Particle physicist Juan Pedro Ochoa-Ricoux talks about how the next generation of neutrino experiments will test the boundaries of the Standard Model https://physicsworld.com/wp-content/uploads/2024/10/2024-09-Careers-Ochoa-Ricoux-dome.jpg newsletter
Gender gap in physics entrenched by biased collaboration networks, study finds https://physicsworld.com/a/gender-gap-in-physics-entrenched-by-biased-collaboration-networks-study-finds/ Fri, 04 Oct 2024 08:00:00 +0000 https://physicsworld.com/?p=117163 Interventions to integrate young female physicists into established networks could help tackle under-representation

The post Gender gap in physics entrenched by biased collaboration networks, study finds appeared first on Physics World.

]]>
Biased collaboration and citation patterns are responsible for driving the gender gap in physics. That is according to a new study, which finds that poor female representation persists due to established male physicists preferring to work with early-career male researchers. The study’s authors say that integrating young female physicists into established networks could help to tackle the under-representation of women (Communications Physics 7 309).

The gender gap in physics is one of the largest in science and recent research suggests that it could take a couple of centuries until there are equal numbers of senior male and female physicists.

Keen to understand the network dynamics behind the gap, Fariba Karimi at the Complexity Science Hub in Austria and colleagues analysed 668,028 papers published in American Physical Society journals between 1893 and 2020, together with 8.5 million citations.

They deduced with “high confidence” the genders of 136,598 first authors in the APS dataset and used this data to construct citation and co-authorship networks.

Despite rising overall numbers of female physicists and female-led papers, the authors find that the ratio of male to female first authors and researchers has remained stable for decades. In fact, the gender gap in absolute numbers appears to be growing.

The researchers then developed a model of the citation and co-authorship networks to explore how the “adoption” of new members by established members impacts network growth.

Small changes

The model focused on two mechanisms. One is “asymmetric mixing” – the inclination of people to adopt people like themselves. The other is general preferential attachment, or the idea that established network members attract more connections.

The model mirrors real-world dynamics and shows that these mechanisms and adoption behaviours cause group ratio inequalities to persist. In the case of physicists, the gender imbalance continues because male physicists are more likely to collaborate with and cite their male counterparts.

Compared with women, men entering the network are more likely to be adopted by those who are already well established in the network, which tends to be men. This trend has been shown elsewhere with research in 2022 finding that male-led papers are more likely to cite male-led work.

The team then used their model to show how small changes to a two-group system can alter the group balance. They find that if the simulation’s mixing values – such as adoption behaviours – are altered slightly in favour of a smaller, less dominant group, that group’s size quickly catches up with that of the dominant group.
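The two mechanisms described above, preferential attachment plus group-dependent "adoption", can be illustrated with a toy simulation. The sketch below is not the paper's exact model; all parameter names and values (group sizes, mixing probabilities) are illustrative assumptions chosen only to show how a small asymmetry in who adopts whom skews how connections accumulate in the two groups.

```python
import random

def grow_network(n_steps, p_minority=0.2, h_major=0.8, h_minor=0.5, seed=1):
    """Toy two-group growth model (illustrative only, not the study's exact model).

    Each new node gets a group label (minority with probability p_minority)
    and attaches to one existing node. The target is chosen by preferential
    attachment (probability proportional to degree), but the target "adopts"
    the newcomer only with a probability that depends on whether their
    groups match, i.e. asymmetric mixing.
    """
    rng = random.Random(seed)
    groups = [0, 1]            # two seed nodes, one from each group
    degree = [1, 1]            # one edge between the seed nodes
    homophily = {0: h_major, 1: h_minor}  # chance of adopting a same-group newcomer
    for _ in range(n_steps):
        g_new = 1 if rng.random() < p_minority else 0
        while True:
            # preferential attachment: pick a candidate adopter by degree
            target = rng.choices(range(len(groups)), weights=degree)[0]
            h = homophily[groups[target]]
            accept = h if groups[target] == g_new else 1 - h
            if rng.random() < accept:
                break
        groups.append(g_new)
        degree.append(1)
        degree[target] += 1
    return groups, degree

groups, degree = grow_network(5000)
maj = [d for g, d in zip(groups, degree) if g == 0]
mino = [d for g, d in zip(groups, degree) if g == 1]
print("avg degree majority:", sum(maj) / len(maj))
print("avg degree minority:", sum(mino) / len(mino))
```

With the assumed values, majority-group nodes hold most of the degree mass and also preferentially adopt majority newcomers, so minority nodes accumulate fewer connections per head; nudging the mixing values in favour of the smaller group reverses this, echoing the "small changes" result described above.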

Karimi says that it is “not just about having more women” but also about how they are integrated into networks. “In real systems, it’s not as simple as someone coming and connecting to others in a network,” adds Karimi. “It is also a matter of who takes in the newcomer and adopts him or her into their personal network.”

To alter the network dynamics, the study authors suggest interventions such as creating opportunities for junior women to collaborate with senior men and giving female researchers more opportunities for funding and promotion. “If we don’t take these interventions soon, this gap will not close very easily,” says Karimi.

The post Gender gap in physics entrenched by biased collaboration networks, study finds appeared first on Physics World.

]]>
News Interventions to integrate young female physicists into established networks could help tackle under-representation https://physicsworld.com/wp-content/uploads/2024/10/people-connections-management-1264228218-iStock_cagkansayin.jpg newsletter1
Nobel predictions and humorous encounters with physics laureates https://physicsworld.com/a/nobel-predictions-and-humorous-encounters-with-physics-laureates/ Thu, 03 Oct 2024 14:01:12 +0000 https://physicsworld.com/?p=116906 Physics World editors gaze into their crystal ball and reminisce about past Nobel winners

The post Nobel predictions and humorous encounters with physics laureates appeared first on Physics World.

]]>
In this episode of the Physics World Weekly podcast, our very own Matin Durrani and Hamish Johnston explain why they think that this year’s Nobel Prize for Physics could be awarded for work in condensed-matter physics – and who could be in the running. They also reminisce about some of the many Nobel laureates that they have met over the years and the excitement that comes every October when the winners are announced.


SmarAct Group logo

SmarAct proudly supports Physics World‘s Nobel Prize coverage, advancing breakthroughs in science and technology through high-precision positioning, metrology and automation. Discover how SmarAct shapes the future of innovation at smaract.com.

The post Nobel predictions and humorous encounters with physics laureates appeared first on Physics World.

]]>
Podcasts Physics World editors gaze into their crystal ball and reminisce about past Nobel winners https://physicsworld.com/wp-content/uploads/2024/10/Matin-and-Hamish.jpg newsletter
Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ https://physicsworld.com/a/celebrating-with-a-new-nobel-laureate-in-canadas-steeltown/ Thu, 03 Oct 2024 14:00:23 +0000 https://physicsworld.com/?p=116904 The magical day Bertram Brockhouse won his prize

The post Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ appeared first on Physics World.

]]>
For nearly two decades I have been covering the Nobel prize for Physics World and every October I tune in to the announcement that’s made live from Stockholm. But the frisson I feel with each announcement brings me straight back to a day 30 years ago when Bertram Brockhouse bagged the award.

Three decades ago I was living in Hamilton, an industrial city at the western end of Lake Ontario. About 70 km from downtown Toronto and staunchly blue collar, Hamilton was famous for its smoke-belching steel mills and its beloved Tiger-Cats of the Canadian Football League. In addition to steel, the city has been home to myriad manufacturing companies and in the days of Empire it had been dubbed the “Birmingham of Canada”.

So it’s safe to say that Hamilton in the 1990s was not the sort of place where you would expect to run into a Nobel laureate.

But that changed one day in October 1994. I began that day listening to a news bulletin on CBC radio – and the lead item was that the Canadian physicist Bertram Brockhouse had won half of the 1994 Nobel Prize for Physics for his pioneering work on inelastic neutron scattering.

In 1994 Brockhouse was an emeritus professor of physics at McMaster University in Hamilton – where I was doing a PhD. What’s more, I had been an undergraduate intern at Chalk River Laboratories, where I worked at the Neutron Physics Branch – which was founded by Brockhouse in 1960 before he left for McMaster.

“Son of a gun”

Needless to say, I was very excited to get to the physics department and join in the celebrations that morning. And I was not disappointed. As I arrived, the normally mild-mannered theorist Jules Carbotte was skipping along the corridor shouting “Bert Brockhouse, son of a gun” as he punched the air.

I don’t remember seeing Brockhouse that day, but everyone else was in very good spirits. Indeed, it was the start of celebrations at the university that seemed very inclusive to me – with faculty, students and members of the wider community invited to what seemed like endless parties and receptions. This was understandable because Brockhouse was McMaster’s first Nobel prize winner. There have been three more since – including another in physics, with the 2018 laureate Donna Strickland having done her degree in engineering physics at McMaster.

At one of those receptions I was introduced to Brockhouse and discovered that he lived in one of my favourite parts of Hamilton – a semi-rural and heavily-wooded portion of the Niagara Escarpment nestled between the former towns of Ancaster and Dundas. Instead of talking about neutrons, I believe we chatted about the growing number of deer in the area and how they were wreaking havoc in people’s gardens.

Coffee lounge gang

Brockhouse had retired a decade earlier, but he was often at the university where he shared a small office with other emeritus professors – a gang that I would often see in the coffee lounge. As I recall, he was very quickly given an office of his own (and perhaps a personal assistant) to help him cope with his new fame.

While writing this piece, I was surprised to discover that Brockhouse was just 76 when he bagged his Nobel for work he had done 40 years previously. Perhaps because 30 years have passed, 76 no longer seems old to me – but I don’t think this is just my perception. Today, as mandatory retirement fades into the past and people are encouraged to remain physically and mentally active, 76 is not that old for a working physicist. Many people that age and older continue to make important contributions to physics.

Indeed, one of Brockhouse’s colleagues at McMaster – Tom Timusk – remains active in research into his 90s. In 2003 Timusk published an obituary of Brockhouse in Nature and it reminded me of what Brockhouse said to a gathering of students after he won the prize: “I used to think that my work was not important, but recently I have had to change my mind.”

How nice to be able to look back on one’s work and find value. I suspect that, like Brockhouse, many people underestimate their contributions to the greater good. But unlike Brockhouse, some will never stand corrected.


The post Celebrating with a new Nobel laureate in Canada’s ‘Steeltown’ appeared first on Physics World.

]]>
Blog The magical day Bertram Brockhouse won his prize https://physicsworld.com/wp-content/uploads/2024/10/3-10-24-Brockhouse.jpg
Camera takes inspiration from cats’ eyes to improve imaging performance https://physicsworld.com/a/camera-takes-inspiration-from-cats-eyes-to-improve-imaging-performance/ Thu, 03 Oct 2024 12:00:23 +0000 https://physicsworld.com/?p=117174 Device might be employed in applications such as autonomous vehicles, drones and surveillance systems

The post Camera takes inspiration from cats’ eyes to improve imaging performance appeared first on Physics World.

]]>
Features of feline eyes

A novel camera inspired by structures within cats’ eyes could be employed in autonomous vehicles, drones and surveillance systems – applications where precise object detection in varied light conditions and complex backgrounds is critical.

One key feature of the new device is the use of a vertically elongated slit, like the pupils of cats’ eyes, which are different from those of other mammals, explains Minseok Kim of the Gwangju Institute of Science and Technology in Korea. As in a cat’s eye, this pupil creates an asymmetric depth of focus when it dilates and contracts, allowing the camera to blur out backgrounds and focus sharply on objects. Another feature is a metal reflector that enables more efficient light absorption in low-light settings. This mimics the tapetum lucidum, a mirror-like structure that gives cats’ eyes their characteristic glow by reflecting incident light back through the retina, effectively amplifying it.

“The result is a camera that works well in both bright and low-light environments, allowing it to capture high-sensitivity images without the need for complex software post-processing,” Kim says.

Mimicking animal eyes

Kim and colleagues have been working on mimicking the eyes of various animals for several years. Some of their recent studies include structures inspired by fish eyes, fiddler crab eyes, cuttlefish eyes and avian eyes. They decided to work on this latest project with the aim of overcoming the limitations of current camera systems, in particular their difficulty in handling very low or very bright lighting conditions. They also wanted to do away with the post-processing image software required to better distinguish objects from their backgrounds.

One of the main difficulties that the researchers had to overcome in this study was to simplify the intricate structure of the tapetum lucidum. Instead of replicating it exactly, they used a metal reflector placed beneath a hemispherical silicon photodiode array, which reduces excessive light and enhances photosensitivity. This design allows for clear focusing under bright light and improved sensitivity in dim conditions.

“Another challenge was to create a vertical pupil that could mimic the cat’s ability to focus sharply on an object while blurring the background,” says Kim. “We were able to construct the vertical aperture using a 3D printer, but our future work will focus on making this pupil dynamic so it can automatically adjust its size in response to changing light conditions.”

Many application areas

The research could significantly improve technologies that rely on high-performance imaging in difficult lighting conditions, Kim tells Physics World. The team expects the system to be highly useful in autonomous vehicles, where precise object detection is critical for safe navigation.

“It could also be applied to drones and surveillance systems that operate in various lighting environments, as well as in military applications where camouflage-breaking capabilities are essential,” Kim adds. “The system could also find use in medical imaging, where the ability to capture high-sensitivity, real-time images without extensive software processing is crucial.”

The researchers now plan to further optimize their camera’s pixel density – which they admit is quite low at the moment – and its resolution to improve image quality. “We also aim to conduct more real-world tests, particularly in applications such as autonomous driving and robotic surveillance, to evaluate how the system performs in practical settings,” says Kim. “Lastly, we are looking into binocular object recognition systems so that the camera can handle more complex visual tasks.”

The study is detailed in Science Advances.

The post Camera takes inspiration from cats’ eyes to improve imaging performance appeared first on Physics World.

]]>
Research update Device might be employed in applications such as autonomous vehicles, drones and surveillance systems https://physicsworld.com/wp-content/uploads/2024/10/03-10-24-cats-eyes-camera-featured.jpg
Robert Laughlin: the Nobel interview that became an impromptu press conference https://physicsworld.com/a/robert-laughlin-the-nobel-interview-that-became-an-impromptu-press-conference/ Thu, 03 Oct 2024 09:13:31 +0000 https://physicsworld.com/?p=116903 Matin Durrani winces at the time he met Nobel laureate Robert Laughlin

The post Robert Laughlin: the Nobel interview that became an impromptu press conference appeared first on Physics World.

]]>
As a science journalist, some interviews you do go well, some don’t, but at least they usually have a distinct start and end. That wasn’t the case with Robert Laughlin, whom I once met at the annual Lindau conference for Nobel-prize-winners in Germany.

Most of the conference involves Nobel laureates giving lectures to a select band of PhD students from around the world. But Laughlin, who’d shared the 1998 Nobel Prize for Physics for his work on quantum fluids with fractional charges, had agreed to speak to me in a private room at the conference venue on the shores of Lake Constance.

Things started sensibly enough (he was ostensibly talking about a new book he was writing) but after about 20 minutes, a conference official barged in.

There’d been an over-booking and no, we weren’t allowed to stay. We were two people in the wrong place at the wrong time – and the fact that one of us was a Nobel-prize-winning physicist didn’t cut any mustard. Out we went.

Laughlin and I packed up our stuff and reconvened at an outside terrace in the summer sun, where we tried to pick up the thread of our conversation.

Now, laureates like Laughlin are the big draw of the Lindau conference – in fact, they’re the whole reason the meeting takes place. If Lindau were a music festival, they’d be the artists everyone’s come to see.

Before I knew what was going on, first one then two then three students had sidled up to our table. Like electrons around a nucleus, they’d been attracted by the presence of a Nobel laureate and weren’t going to miss out.

Laughlin didn’t appear fazed by the unexpected turn of events; in fact, I’m sure Nobel laureates love nothing better than being the centre of attention. Within minutes, the entire table had been surrounded by a phalanx of hangers-on.

Our one-to-one interview had become an impromptu one-man press conference with me seemingly serving as Laughlin’s minder. As he held court to his gaggle of fawning students, apparently oblivious that I was still there, Laughlin was in his element.

Laughlin probably doesn’t remember the encounter: Nobel laureates, who are the only real celebrities in physics, meet hundreds of people all the time. The students, however, appeared to be enjoying themselves, so the conference organizers must have been happy.

But I just ended up squirming in my seat. I put my notebook back in my bag and let Laughlin take over.


The post Robert Laughlin: the Nobel interview that became an impromptu press conference appeared first on Physics World.

]]>
Blog Matin Durrani winces at the time he met Nobel laureate Robert Laughlin https://physicsworld.com/wp-content/uploads/2024/10/DURRANI-Laughlin.jpg
Steven Weinberg: the Nobel laureate who liked nuts https://physicsworld.com/a/steven-weinberg-the-nobel-laureate-who-liked-nuts/ Wed, 02 Oct 2024 14:00:31 +0000 https://physicsworld.com/?p=116902 Matin Durrani recounts a one-sided interview with Steven Weinberg

The post Steven Weinberg: the Nobel laureate who liked nuts appeared first on Physics World.

]]>
Steven Weinberg

It was 2003 and Steven Weinberg was sitting with me in the lobby of a hotel in Geneva, explaining his research into fundamental physics, when he paused to grab a handful of peanuts from a bowl on the table in front of us.

I had been speaking to Weinberg as he’d come to Switzerland to give a lecture at CERN on the development of the Standard Model of particle physics, in which he’d played a key part, and had agreed to an interview with Physics World during a break in his schedule.

The old-fashioned Dictaphone on which I recorded our interview has gone missing, so I’ve only got a hazy recollection of what he said. I do remember that Weinberg was charming, friendly and witty, but it was pretty clear he felt he was in the company of some kind of intellectual buffoon.

Turning round, he asked me: “Do you like nuts?”

You see, the only time Weinberg properly interacted with me was to reveal how he enjoyed those little bags of nuts you get on plane journeys (he was obviously used to flying business class); it was then that he wanted my view of them too. It was as if Weinberg doubted I could handle anything deeper than airline snacks and was just trying to be kind.

That’s what happens when you interview a Nobel laureate. Apart from enjoying the sound of their own voice, they obviously know several orders of magnitude more than you do about their specialist subject.

You’re left squirming and feeling ever so slightly inadequate, trying to absorb a whirlwind of high-level information while at the same time desperately wondering what your next question should be.

His opinion of me certainly must have dipped further a few weeks later. Despite some misgivings, I decided to write up our interview and e-mail Weinberg my draft, which covered his life, research and career.

Stupidly, I’d made a few schoolboy errors near the start, prompting Weinberg to write back, explaining he didn’t have the time or energy to check my nonsense any further (I paraphrase slightly) and, no, he wasn’t going to spend time pointing out my mistakes either.

At least Weinberg was polite, which is more than you could say for the late Subrahmanyan Chandrasekhar, who shared the 1983 Nobel Prize for Physics for his theoretical work on the structure and evolution of stars. Robert P Crease takes up the story in this memorable article.


The post Steven Weinberg: the Nobel laureate who liked nuts appeared first on Physics World.

]]>
Blog Matin Durrani recounts a one-sided interview with Steven Weinberg https://physicsworld.com/wp-content/uploads/2021/08/Steven-Weinberg.jpg
CERN celebrates 70 years at the helm of particle physics in lavish ceremony https://physicsworld.com/a/cern-celebrates-70-years-at-the-helm-of-particle-physics-in-lavish-ceremony/ Wed, 02 Oct 2024 12:02:53 +0000 https://physicsworld.com/?p=117142 The event was attended by 38 national delegations as well as Her Royal Highness Princess Astrid of Belgium

The post CERN celebrates 70 years at the helm of particle physics in lavish ceremony appeared first on Physics World.

]]>
Officials gathered yesterday for an official ceremony to celebrate 70 years of the CERN particle-physics lab, which was founded in 1954 in Geneva less than a decade after the end of the Second World War.

The ceremony was attended by 38 national delegations including the heads of state and government from Bulgaria, Italy, Latvia, Serbia, Slovakia and Switzerland as well as Her Royal Highness Princess Astrid of Belgium and the president of the European Commission. It marked the culmination of a year of events that showcased the lab’s history and plans for the future as it looks beyond the Large Hadron Collider.

Created to foster peace between nations and bring scientists together, CERN’s origins can be traced back to 1949, when the French Nobel-prize-winning physicist Louis de Broglie first proposed the idea of a European laboratory. A resolution to create the European Council for Nuclear Research (CERN) was adopted at a UNESCO conference in Paris in 1951, with 11 countries signing an agreement to establish the CERN council the year after.

CERN Council met for the first time in May 1952 and in October of that year chose Geneva as the site for a 25–30 GeV proton synchrotron. The formal convention establishing CERN was signed at a meeting in Paris in 1953 by the lab’s 12 founding member states: Belgium, Denmark, France, West Germany, Greece, Italy, the Netherlands, Norway, Sweden, Switzerland, the UK and Yugoslavia.

On 29 September 1954 CERN was formed and the provisional CERN council was dissolved. That year also saw construction begin at the lab, where the proton synchrotron, with a circumference of 628 m, accelerated protons to an energy of 24 GeV for the first time on 24 November 1959, becoming the world’s highest-energy particle accelerator.

A proud moment

Today CERN has 23 member states and 10 associate member states. Some 17,000 people of 100 nationalities work at CERN, mostly on the LHC, but the lab also carries out research into antimatter and theory. CERN now plans to build on that success with the Future Circular Collider, which, if funded, would include a 91 km circumference collider to study the Higgs boson in unprecedented detail.

As part of the celebrations, this year has seen over 100 events organized in 63 cities in 28 countries. The first public event at CERN, held on 30 January, combined science, art and culture, and featured scientists discussing the evolution of particle physics and CERN’s significant contributions in advancing this field.

Other events over the past months have focused on open questions in physics and future directions; the link between fundamental science and technology; CERN’s role as a model for international collaboration; and training, education and accessibility.

The meeting yesterday, the culmination of this year-long celebration, was held in the auditorium of CERN’s Science Gateway, which was inaugurated in October 2023.

“CERN is a great success for Europe and its global partners, and our founders would be very proud to see what CERN has accomplished over the seven decades of its life,” noted CERN director general Fabiola Gianotti. “The aspirations and values that motivated those founders remain firmly anchored in our organization today: the pursuit of scientific knowledge and technological developments for the benefit of humanity; training and education; collaboration across borders, diversity and inclusion; knowledge, technology and education accessible to society at no cost; and a great dose of boldness and determination to pursue paths that border on the impossible.”

The post CERN celebrates 70 years at the helm of particle physics in lavish ceremony appeared first on Physics World.

]]>
News The event was attended by 38 national delegations as well as Her Royal Highness Princess Astrid of Belgium https://physicsworld.com/wp-content/uploads/2024/10/10-Family-photo-CERN-small.jpg newsletter1
Rambling tour of Europe explores the backstory of the Scientific Revolution https://physicsworld.com/a/rambling-tour-of-europe-explores-the-backstory-of-the-scientific-revolution/ Wed, 02 Oct 2024 10:00:28 +0000 https://physicsworld.com/?p=116825 Victoria Atkinson reviews Inside the Stargazer’s Palace by Violet Moller

The post Rambling tour of Europe explores the backstory of the Scientific Revolution appeared first on Physics World.

]]>
Sixteenth-century Europe was a place of great change. Religious upheaval swept the continent, empires expanded and the mystic practices of the medieval world slowly began shifting toward modern science.

Copernicus’s heliocentric model of the universe, introduced in 1543, is often considered the origin of this so-called “Scientific Revolution”. However, with her latest book Inside the Stargazer’s Palace: the Transformation of Science in 16th-Century Northern Europe, historian and writer Violet Moller gives the story behind this transformation, putting lesser-known figures at the fore. She looks at the effect of religious and geopolitical events in northern Europe, starting from the late 15th century, and shows how the scholars of this period drew together strands of scientific thought that had been developing for decades.

Beginning in the German town of Nuremberg in 1471, the book is a sweeping tour of the continent, visiting the ancient university city of Louvain in what is now Belgium, the London suburb of Mortlake, Kassel in Germany and the formerly Danish island of Hven. She concludes this journey in Prague with the deposition of the Holy Roman Emperor and scientific patron Rudolf II in 1611, an event that broke apart Europe’s flourishing community of scientific minds.

As a scientist, I was disappointed to find the book fairly light on scientific detail. Inside the Stargazer’s Palace is first and foremost a history book, but I felt that some more scientific context would help most readers grasp the significance of the events Moller describes.

Nonetheless, it was fascinating to see how politics and economics across the continent shaped scientific study. In the 15th century, the scientific community in northern Europe was exceedingly small, with scholarly knowledge restricted to those who could travel to the great knowledge centres in Italy, Greece and beyond. However, the development of the printing press in 1440 and the founding of the first scientific print house in Nuremberg changed the way information was shared forever. As scientific knowledge became more accessible, interest in understanding the natural world began to grow.

Through the closely connected tales of a number of individuals – from cartographer and instrument maker Gemma Frisius to the renowned astronomer Tycho Brahe – we see the beginnings of a scientific community. As Moller says, “Everyone, it seems, knew everyone,” with theories, techniques and instruments shared across a growing network of enthusiastic practitioners.

The development of the printing press mid-century and the founding of the first scientific print house in Nuremberg changed the way information was shared forever

This complexity did not come without its challenges. Moller introduces so many significant figures, each with their own niche, that by chapter four it’s difficult to keep track of who everyone is. The emphasis on personal stories also creates a slightly muddled narrative. In the introduction, Moller tells us “This narrative is based around places,” but at times the location seems incidental at best, if not entirely irrelevant. For example, chapter five ostensibly focuses on the Danish (now Swedish) island of Hven, home to Tycho Brahe. However, over the first 20 pages, we instead follow Brahe on his travels around Europe, and the description of his famous castle-cum-laboratory Uraniborg at the end of the chapter feels rather compressed. Other locations, notably Kassel and Prague, are only relevant during the lifetime of a single enthusiastic patron, begging the question of whether it was the place or the person that really mattered.

Despite this sometimes rambling focus, Moller expertly guides the reader through the significant cultural and political events of the century. Beginning in the 1510s, the spread of Lutheranism across Europe brought with it an intellectual revolution, with its fiercest proponents encouraging followers to “think in innovative ways … and focus on praising God through studying his creation”. The conflict between the new Protestant denominations and the traditional Catholic faith drove the migration of great minds, who converged on the places most supportive of their scientific endeavours.

During this period, new observations also directly challenged long-held beliefs. In the early 16th century, astronomy and astrology were one and the same, and astrological predictions underpinned everything from medicine to political decisions. However, a series of astronomical phenomena towards the end of the century – the appearance of a new star in 1572 (later confirmed as a supernova), a comet in 1577, and the conjunction of Saturn and Jupiter in 1583 – triggered a shift away from divinatory thinking in the following decades. Measurements made from these observations conflicted with accepted theories about the universe, showing that the stars and planets were much further away than previously thought.

The discussion of these phenomena is a welcome one, introducing one of the surprisingly few scientific details in the book. We are still left to guess many of the basic particulars of this scientific study: what was being measured and how, and why the results were significant. Moller instead provides a list of instruments – astrolabes, quadrants, sextants, torquetums and astronomy rings – with little or no explanation of what they are or how they work.

Moller is a historian, specializing in 16th-century England, so perhaps these subjects are beyond the scope of her expertise. However, a further frustration is the almost exclusive focus on astronomy; there is scant mention of other topics such as alchemy or botany, although this was promised by the book’s synopsis. Occasionally it also seems that Moller indulges her personal enthusiasm over the needs of the reader, placing an undue emphasis on inconsequential details and characters – John Dee, for example, continues to crop up long after his relevant contributions have passed.

The lack of scientific detail and loose focus made this a sometimes frustrating read. However, I can see that for non-scientists and those who prefer a more fluid approach, the book presents an intriguing alternative view of the Scientific Revolution. By the end of Inside the Stargazer’s Palace and, correspondingly, the 16th century, the stage has been set for the discoveries to come, but it feels like we’ve taken a circuitous route to get there.

  • 2024 Oneworld 304pp £25.00hb

The post Rambling tour of Europe explores the backstory of the Scientific Revolution appeared first on Physics World.

]]>
Opinion and reviews Victoria Atkinson reviews Inside the Stargazer’s Palace by Violet Moller https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Atkinson_stargazer_iStock_BlackAperture.jpg newsletter
Nuclear clock ticks ever closer https://physicsworld.com/a/nuclear-clock-ticks-ever-closer/ Wed, 02 Oct 2024 08:30:49 +0000 https://physicsworld.com/?p=117098 New device could not only be the best time-keeper ever, it could also revolutionize fundamental physics studies

The post Nuclear clock ticks ever closer appeared first on Physics World.

]]>
Could a new type of clock potentially be more accurate than today’s best optical atomic clocks? Such a device is now nearing reality, thanks to new work by researchers at JILA and their collaborators who have successfully built all the elements necessary for a fully functioning nuclear clock. The clock might not only outperform the best time-keepers today, it could also revolutionize fundamental physics studies.

Today’s most accurate clocks rely on optically trapped ensembles of atoms or ions, such as strontium or ytterbium. They measure time by locking laser light into resonance with the frequencies of specific electronic transitions. The oscillations of the laser then behave like (very high-frequency) pendulum swings. Such clocks can be stable to within one part in 10²⁰, which means after nearly 14 billion years (or the age of the universe), they will be out by just 10 ms.
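That stability figure is easier to grasp with a back-of-envelope calculation. The sketch below takes only the ~13.8-billion-year age of the universe and the one-part-in-10²⁰ fractional stability quoted above as inputs:

```python
# Back-of-envelope check: how far a clock stable to one part in 10^20
# drifts over the ~13.8-billion-year age of the universe.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # Julian year in seconds
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR  # ~4.4e17 s
fractional_stability = 1e-20

drift_s = age_of_universe_s * fractional_stability
print(f"Accumulated error: {drift_s * 1e3:.1f} ms")  # a few milliseconds
```

The result is of order milliseconds, consistent with the figure quoted above.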

As well as accurately keeping time, atomic clocks can be used to study fundamental physics phenomena. Nuclear clocks should be even more accurate than their atomic counterparts since they work by probing nuclear energy levels rather than electronic energy levels. They are also less sensitive to external electromagnetic fluctuations that could affect clock accuracy.

Detecting tiny temporal variations

A nucleus measures between 10⁻¹⁴ and 10⁻¹⁵ m across, while an atom is 10⁻¹⁰ m. Shifts between nuclear energy levels are thus higher in energy and would be resonant with a higher-frequency laser. This translates into more wave cycles per second — and can be thought of as a greater number of pendulum swings per second.

Such a nuclear transition probes fundamental particles and interactions differently to existing atomic clocks. Comparing a nuclear clock with a precise atomic clock could therefore help to unearth new discoveries related to very tiny temporal variations, such as those in the values of the fundamental constants of nature. Any detected changes would point to physics beyond the Standard Model.

The problem is that the high-frequency lasers needed to excite the nuclear transitions in most elements are not easy to come by. To excite nuclear transitions, most atomic nuclei need to be hit by high-energy X-rays. In the late 1970s, however, physicists identified thorium-229 as having the smallest known nuclear energy gap of any atom and found that it could thus be excited by lower-energy ultraviolet light. In 2003 Ekkehard Peik and Christian Tamm at the Physikalisch-Technische Bundesanstalt (Germany’s national metrology institute) proposed that this transition could be used to make a nuclear clock. But it was only in 2016 that this transition was directly observed for the first time.
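A quick calculation shows why this energy gap puts the transition within reach of laser technology. Taking an isomer energy of roughly 8.4 eV (an approximate literature value assumed here, not the precise figure measured by the JILA team), the photon energy relation E = hν gives:

```python
# Why thorium-229 is special: its isomeric nuclear transition sits at
# roughly 8.4 eV (an assumed approximate literature value), low enough to
# be driven by vacuum-ultraviolet laser light rather than X-rays.
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C = 2.99792458e8           # speed of light, m/s

energy_ev = 8.4
frequency_hz = energy_ev / H_EV_S          # ~2e15 Hz
wavelength_nm = C / frequency_hz * 1e9     # ~148 nm, vacuum ultraviolet

print(f"{frequency_hz:.2e} Hz, {wavelength_nm:.0f} nm")
```

A wavelength of about 148 nm sits in the vacuum ultraviolet, far below the keV-scale photon energies (X-rays) that other nuclear transitions demand.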

In the new study, an international team led by Jun Ye at JILA, a joint institute of NIST and the University of Colorado Boulder, have fabricated all of the components needed to create a nuclear clock made from thorium-229. These are: a coherent laser for resolving different nuclear states; a “high concentration” thorium-229 sample embedded in a solid-state calcium fluoride host crystal; and a “frequency comb” referenced to an established atomic standard for precisely measuring the frequency of these transitions.

A frequency comb is a special type of laser that acts like a measuring stick for light. It works using laser light that comprises up to 10⁶ equidistant, phase-stable frequencies (which look like the teeth of a comb) to measure other unknown frequencies with high precision and absolute traceability when compared with a radiofrequency standard. The researchers used a frequency comb operating in the infrared part of the spectrum, which they upconverted (through a cavity-enhanced high harmonic generation process) to produce a vacuum-ultraviolet frequency comb whose frequency is linked to the infrared comb. They then used one line in the comb laser to drive the thorium nuclear transition.
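The “measuring stick” idea can be sketched in a few lines. A comb tooth sits at fₙ = f_ceo + n·f_rep, so an unknown optical frequency is pinned down by counting three radiofrequencies: the repetition rate, the offset, and the beat note against the nearest tooth. All numbers below are illustrative, not values from the JILA experiment:

```python
# A frequency comb's "teeth" lie at f_n = f_ceo + n * f_rep, so an unknown
# optical frequency can be measured by beating it against the nearest tooth.
# Numbers below are illustrative, not from the experiment described here.
f_rep = 200e6     # repetition rate, Hz (a radiofrequency, easy to count)
f_ceo = 35e6      # carrier-envelope offset frequency, Hz

def tooth(n):
    return f_ceo + n * f_rep

unknown = 1.234567890123e15                # hypothetical optical frequency, Hz
n = round((unknown - f_ceo) / f_rep)       # index of the nearest tooth
beat = unknown - tooth(n)                  # radiofrequency beat note
print(f"tooth n={n}, beat={beat / 1e6:.3f} MHz")
# Knowing n, f_rep, f_ceo and the beat note recovers the optical frequency
# with radiofrequency-level traceability.
```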

Comparisons for fundamental physics studies

And that is not all: the team also succeeded in directly comparing the ultraviolet frequency to the optical frequency employed in one of today’s best atomic clocks made from strontium-87. This last feat will be the starting point for future nuclear–atomic clock comparisons for fundamental physics studies. “For example, we’ll be able to precisely test if some fundamental constants (like the fine structure alpha) are constant or slowly varying over time,” says Chuankun Zhang, a graduate student in Ye’s group.

Looking forward, the researchers say that they eventually hope to use their technology to make portable solid-state nuclear clocks that can be deployed outside the laboratory for practical applications. They also want to investigate how the clock transitions shift depending on temperature and different crystal environments.

“We also plan to develop faster readout schemes of the excited nuclear states for actual clock operation,” Zhang tells Physics World.

The study is detailed in Nature.

The post Nuclear clock ticks ever closer appeared first on Physics World.

]]>
Research update New device could not only be the best time-keeper ever, it could also revolutionize fundamental physics studies https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_XUV_comb.jpg newsletter1
Fluctuations suppress condensation in 1D photon gas  https://physicsworld.com/a/fluctuations-suppress-condensation-in-1d-photon-gas/ Tue, 01 Oct 2024 15:28:54 +0000 https://physicsworld.com/?p=117127 New result backs up an important theory prediction concerning this exotic state of matter

The post Fluctuations suppress condensation in 1D photon gas  appeared first on Physics World.

]]>
The narrower the parabola shape, the more one-dimensionally the gas behaves

By tuning the spatial dimension of an optical quantum gas from 2D to 1D, physicists at Germany’s University of Bonn and University of Kaiserslautern-Landau (RPTU) have discovered that it does not condense suddenly, but instead undergoes a smooth transition. The result backs up an important theory prediction concerning this exotic state of matter, allowing it to be studied in detail for the first time in an optical quantum gas.

Decreasing the number of dimensions from three to two to one dramatically influences the physical behaviour of a system, causing different states of matter to emerge. In recent years, physicists have been using optical quantum gases to study this phenomenon.

In the new study, conducted in the framework of the collaborative research centre OSCAR, a team led by Frank Vewinger of the Institute of Applied Physics (IAP) at the University of Bonn looked at how the behaviour of a photon gas changed as it went from being 2D to 1D. The researchers prepared the 2D gas in an optical microcavity, which is a structure in which light is reflected back and forth between two mirrors. The cavity was filled with dye molecules. As the photons repeatedly interact with the dye, they cool down and the gas eventually condenses into an extended quantum state called a Bose–Einstein condensate.

Parabolic-shaped protrusions

To make the gas 1D, they modified the reflective surface of the optical cavity by laser-printing a transparent polymer nanostructure on top of one of the mirrors. This patterning created parabolic-shaped protrusions that could be elongated and made narrower – and in which the photons could be trapped.

As the gas transitioned between the 2D and 1D structures, Vewinger and colleagues measured its thermal properties as it was allowed to come back to room temperature – by coupling it to a heat bath. Usually, there is a precise temperature at which condensation occurs – think of water freezing at precisely 0°C. The situation is different when a 1D gas instead of a 2D one is created, however, explains Vewinger. “So-called thermal fluctuations take place in photon gases but they are so small in 2D that they have no real impact. However, in 1D these fluctuations can – figuratively speaking – make big waves.”

These fluctuations destroy the order in 1D systems, meaning that different regions within the gas begin to behave differently, he adds. The phase transition therefore becomes more diffuse.
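The sharpening of the transition with dimensionality can be illustrated with a toy calculation: an ideal Bose gas in a harmonic trap, with the ground-state occupation found by adjusting the fugacity to fix the particle number. This is only a generic sketch of the 1D-versus-2D physics (the level spacings, particle number and neglect of the dye coupling are all simplifying assumptions), not a model of the Bonn–Kaiserslautern experiment:

```python
import math

# Toy model: condensate (ground-state) fraction of an ideal Bose gas in a
# harmonic trap. Levels eps_k = k in units of hbar*omega, with degeneracy
# 1 in 1D and k+1 in an isotropic 2D trap. Temperatures in units of
# hbar*omega/k_B. Illustrative only.

def total_n(z, T, dims, kmax=2000):
    """Total particle number at fugacity z (0 < z < 1) and temperature T."""
    n = 0.0
    for k in range(kmax + 1):
        g = 1 if dims == 1 else k + 1      # level degeneracy
        b = z * math.exp(-k / T)
        n += g * b / (1.0 - b)             # Bose-Einstein occupation
    return n

def ground_fraction(N, T, dims):
    """Bisect on z until total_n(z) = N, then return N0/N = z/(1-z)/N."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_n(mid, T, dims) < N:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    return (z / (1.0 - z)) / N

N = 1000
for T in (5, 25, 50, 100, 200):
    f1 = ground_fraction(N, T, dims=1)
    f2 = ground_fraction(N, T, dims=2)
    print(f"T={T:>3}: 1D fraction {f1:.2f}   2D fraction {f2:.2f}")
```

For 1000 particles the 2D fraction collapses sharply around its transition temperature, while the 1D fraction drains away gradually over a broad range of temperatures – the diffuse crossover described above.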

A difficult experiment

The experiment was not an easy one to set up, he says. The main challenge was to adapt the direct laser-writing method used to create the small, steep photon-confining structures so that it worked with the dye-filled microcavity. “We then had to analyse the photons emitted from the microcavity.”

“Our colleagues in Kaiserslautern eventually succeeded in fabricating new tiny polymer structures with high resolution, sticking to our ultra-smooth dielectric cavity mirrors (with a roughness of around 0.5 Å) that were robust to both the chemical solvent in our dye solution and the laser irradiation employed to inject photons into the cavity,” he tells Physics World.

It is often the case in physics that theories and predictions are based on simple toy models, and these models are powerful in building robust theoretical framework, he explains. “But nature is far from simple; it is extremely difficult to build these ideal platforms to test these foundational concepts since real-world systems are usually interacting, driven-dissipative or coupled to some other system. For photon condensates, it is known that they very closely resemble an ideal Bose gas coupled to a heat bath, so we were interested in using this platform to study the effect of the dimension on the phase transition to a Bose–Einstein condensate.”

Looking forward, the researchers say they will now use their novel technique to study more elaborate forms of photon confinement – such as logarithmic or Coulomb-like confinement. They also plan to study photons confined in large lattice structures in which stable vortices can form without particle–particle interactions. “For example, in one-dimensional chains, there are predictions of an exotic zig-zag phase, induced by incoherent hopping between lattice sites,” says Vewinger. “In essence, the structuring opens up a large playground for us in which to study interesting physics.”

The present study is detailed in Nature Physics.

The post Fluctuations suppress condensation in 1D photon gas  appeared first on Physics World.

]]>
Research update New result backs up an important theory prediction concerning this exotic state of matter https://physicsworld.com/wp-content/uploads/2024/10/Low-Res_parabola-art-2dand1d-v11.jpg
Enabling the future: printable sensors for a sustainable, intelligent world https://physicsworld.com/a/enabling-the-future-printable-sensors-for-a-sustainable-intelligent-world/ Tue, 01 Oct 2024 13:25:47 +0000 https://physicsworld.com/?p=116706 Nano Futures explores the cutting-edge science and technology driving the development of next-generation printable sensors

The post Enabling the future: printable sensors for a sustainable, intelligent world appeared first on Physics World.

]]>

Join us for an exciting webinar exploring the cutting-edge science and technology driving the development of next-generation printable sensors. These sensors, made from printable materials using simple and cost-effective methods such as printing and coating, are set to revolutionize a wealth of intelligent and sustainability-focused applications, such as smart cities, e-health, precision agriculture, Industry 4.0, and much more. Their distinct advantages – flexibility, minimal environmental impact, and suitability for high-throughput production – make them a transformative technology across various fields.

Building on the success of the Roadmap on printable electronic materials for next-generation sensors published in Nano Futures, our expert panel will offer a comprehensive overview of advancements in printable materials and devices for next-generation sensors. The webinar will explore how innovations in devices based on various printable materials, including 2D semiconductors, organic semiconductors, perovskites, and carbon nanotubes, are transforming sensor technologies for detecting light, ionizing radiation, pressure, gases, and biological substances.

Join us as we explore the status and recent breakthroughs in printable sensing materials, identify key remaining challenges, and discuss promising solutions, offering valuable insights into the potential of printable materials to enable smarter, more sustainable development.

Meet the esteemed panel of experts:

Vincenzo Pecunia is an associate professor and head of the Sustainable Optoelectronics Research Group at Simon Fraser University. He earned a BSc and MSc in electronics engineering from Politecnico di Milano and a PhD in physics from the University of Cambridge. His research focuses on printable semiconductors for electronics, sensing, and photovoltaics. In recognition of his achievements, he has been awarded the Fellowship of the Institute of Physics, the Fellowship of the Institute of Materials, Minerals & Mining, and the Fellowship of the Institution of Engineering and Technology.

Mark C Hersam is the Walter P Murphy Professor of Materials Science and Engineering, director of the Materials Research Center, and chair of the Materials Science and Engineering Department at Northwestern University (USA). His research interests include nanomaterials, additive manufacturing, nanoelectronics, scanning probe microscopy, renewable energy, and quantum information science. Mark has been repeatedly named a Clarivate Analytics Highly Cited Researcher with more than 700 peer-reviewed publications that have been cited more than 75,000 times.

Oana D Jurchescu is a Baker Professor of physics at Wake Forest University (USA) and a fellow of the Royal Society of Chemistry. She received her PhD in 2006 from University of Groningen (the Netherlands) and was a postdoctoral researcher at the National Institute of Standards and Technology (USA). Her expertise is in charge transport in organic and organic/inorganic hybrid semiconductors, device physics, and semiconductor processing. She has received numerous awards for her research and teaching excellence, including the NSF CAREER Award.

Robert Young is an emeritus professor at the University of Manchester (UK), renowned for his pioneering research on the relationship between the structure and mechanical properties of polymers and composites. His work explores the molecular-level deformation of materials such as carbon fibres, spider silk, carbon-fibre composites, carbon nanotubes, and graphene. Robert has received many prestigious awards. He was elected a fellow of the Royal Society in 2013 and a fellow of the Royal Academy of Engineering in 2006. He has written more than 330 research papers and several textbooks on polymers.

Luisa Petti received her MSc in electronic engineering from Politecnico di Milano (Italy) in 2011. She obtained her PhD in electrical engineering from ETH Zurich (Switzerland) in 2016 with a thesis entitled “Metal oxide semiconductor thin-film transistors for flexible electronics”, for which she won the ETH medal. After a short postdoc at ETH Zurich, she joined Cambridge Display Technology Ltd in October 2016 and then FlexEnable Ltd in December 2017, both in Cambridge, UK, as a scientist. In 2018 she joined the Free University of Bozen-Bolzano, where she has been an associate professor in electronics since March 2021. Luisa’s current research includes the design, fabrication and characterization of flexible and printable sensors, energy harvesters, and thin-film devices and circuits, with a focus on sustainable and low-cost materials and manufacturing processes.

Aaron D Franklin is the Addy Professor of electrical and computer engineering and associate dean for faculty affairs in the Pratt School of Engineering at Duke University. His research group explores the use of 1D and 2D nanomaterials for high-performance nanoscale devices, low-cost printed and recyclable electronics, and biomedical sensing systems. Aaron is an IEEE Fellow and has published more than 100 scientific papers in the field of nanomaterial-based electronics. He holds more than 50 issued patents and has been engaged in two funded start-ups, one of which was acquired by a Fortune 500 company.

With support from:

The School of Sustainable Energy Engineering (SEE) sits within Simon Fraser University’s Faculty of Applied Sciences. Its research and academic domain involves the development of solutions for the harvesting, storage, transmission and use of energy, with careful consideration of economic, environmental, societal and cultural implications.

About this journal

Nano Futures is a multidisciplinary, high-impact journal publishing fundamental and applied research at the forefront of nanoscience and technological innovation.

Editor-in-chief: Amanda Barnard, senior professor of computational science and the deputy director of the School of Computing at the Australian National University

 

The post Enabling the future: printable sensors for a sustainable, intelligent world appeared first on Physics World.

]]>
Webinar Nano Futures explores the cutting-edge science and technology driving the development of next-generation printable sensors https://physicsworld.com/wp-content/uploads/2024/09/Printed-sensors-scaled.jpg
Rotating cylinder amplifies electromagnetic fields https://physicsworld.com/a/rotating-cylinder-amplifies-electromagnetic-fields/ Tue, 01 Oct 2024 08:30:07 +0000 https://physicsworld.com/?p=117078 The Zel'dovich effect is observed in an electromagnetic system for the first time

The post Rotating cylinder amplifies electromagnetic fields appeared first on Physics World.

]]>
Physicists have observed the Zel’dovich effect in an electromagnetic system – something that was thought to be incredibly difficult to do until now. This observation, in a simplified induction generator, suggests that the effect could in fact be quite fundamental in nature.

In 1971, the Russian physicist Yakov Zel’dovich predicted that electromagnetic waves scattered by a rotating metallic cylinder should be amplified by gaining mechanical rotational energy from the cylinder. The effect, explains Marion Cromb of the University of Southampton, works as follows: waves with angular momentum – or twist – that would usually be absorbed by an object, instead become amplified by that object. However, this amplification only occurs if a specific condition is met: namely, that the object is rotating at an angular velocity that’s higher than the frequency of the incoming waves divided by the wave angular momentum number. In this specific electromagnetic experiment, this number was 1, due to spin angular momentum, but it can be larger.
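The amplification condition described above fits in one line of code. The 50 Hz field frequency and rotation rates below are illustrative numbers, not values from the experiment:

```python
import math

# The Zel'dovich condition: a wave of angular frequency omega carrying
# angular momentum number m is amplified when the scatterer's rotation
# rate Omega exceeds omega / m.
def zeldovich_amplified(omega: float, m: int, Omega: float) -> bool:
    return Omega > omega / m

# With m = 1 (spin angular momentum, as in this experiment), the cylinder
# simply has to out-rotate the field frequency.
omega = 2 * math.pi * 50                                  # e.g. a 50 Hz field
print(zeldovich_amplified(omega, m=1, Omega=2 * math.pi * 60))  # True
print(zeldovich_amplified(omega, m=1, Omega=2 * math.pi * 40))  # False
```

For larger angular momentum numbers m the threshold rotation rate drops by a factor of m, which is one route to easing the condition.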

In previous work, Cromb and colleagues tested this theory in sound waves, but until now, it had never been proven with electromagnetic waves.

Spin component is amplified

In their new experiments, which are detailed in Nature Communications, the researchers used a gapped inductor to induce a magnetic field that oscillates at an AC frequency around a smooth cylinder made of aluminium. The gapped inductor comprises an AC current-carrying wire coiled around an iron ring with a gap in it. “This oscillating field is an easy way to create the sum of two spinning fields in opposite directions,” explains Cromb. “When the cylinder rotates faster than the field frequency, it thus amplifies the spin component rotating in the same direction.”
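The decomposition Cromb describes is a standard identity: a linearly oscillating field of amplitude B₀ is exactly the sum of two counter-rotating fields of amplitude B₀/2. A quick numerical check (with an arbitrary 50 Hz frequency chosen for illustration):

```python
import math

# A linearly oscillating field B0*cos(w*t) along x is the sum of two fields
# of amplitude B0/2 rotating in opposite senses -- the decomposition the
# gapped inductor exploits.
def linear(B0, w, t):
    return (B0 * math.cos(w * t), 0.0)

def rotating(B0, w, t, sense):
    # sense = +1 for co-rotating, -1 for counter-rotating
    return (B0 * math.cos(w * t), sense * B0 * math.sin(w * t))

B0, w = 1.0, 2 * math.pi * 50.0
for t in (0.0, 0.003, 0.011):
    co = rotating(B0 / 2, w, t, +1)
    counter = rotating(B0 / 2, w, t, -1)
    total = (co[0] + counter[0], co[1] + counter[1])
    lin = linear(B0, w, t)
    assert abs(total[0] - lin[0]) < 1e-12 and abs(total[1] - lin[1]) < 1e-12
print("linear oscillation = co-rotating + counter-rotating components")
```

Only the co-rotating half can satisfy the Zel’dovich condition, which is why a cylinder spinning faster than the field frequency amplifies that component alone.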

The cylinder acts as a resistor in the circuit when it is not moving, but as it rotates, its resistance decreases. As the rotation speed increases, after the Zel’dovich condition has been met, the resistance becomes negative. “We measured the power in the circuit at different rotation speeds and observed that it was indeed amplified once the cylinder span fast enough,” says Cromb.

Until now, it was thought that observing the Zel’dovich effect in an electromagnetic system would not be possible. This was because, in Zel’dovich’s predictions, the condition for amplification, while simple in description, would only be met if the cylinder was rotating at speeds close to the speed of light. “Any slower, and the effect would be too small to be seen,” Cromb adds.

Once they had demonstrated the Zel’dovich effect with sound waves, the Southampton University scientists – together with their theory colleagues at the University of Glasgow and IFN Trento – realized that they could overcome some of the limitations of Zel’dovich’s example while still testing the amplification condition. “The actual experimental set-up is surprisingly simple,” Cromb tells Physics World.

Observing the effect on a quantum level?

Knowing that this effect is present in different physical systems, both in acoustics and now in electromagnetic circuits, suggests that it is quite fundamental in nature, Cromb says. And seeing it in an electromagnetic system means that the team might now be able to observe the effect on a quantum level. “This would be a fascinating test of how quantum mechanics, thermodynamics and (rotational) motion all work together.”

Looking forward, the researchers will now attempt to improve their experimental set-up. At present, it relies on an oscillating magnetic field that contains equal co-rotating and counter-rotating spin components. Only one of these should be Zel’dovich-amplified by the rotating cylinder (the co-rotating component) while the other is only ever absorbed, explains Cromb. “Ideally, we want to switch to a rotating magnetic field so we can confirm that it is only when the field and cylinder rotate in the same direction that the amplification occurs. This would mean that the whole field can be amplified and not just part of it.”

The team has already made some progress in this direction by switching to using a cylindrical stator (the stationary part), not just because it can create such a rotating magnetic field, but also because it fits snugly around the cylinder and thus interacts more strongly with it. This should increase the size of the Zel’dovich effect so it can be more easily measured.

“We hope that these improvements will help us also show a situation akin to a ‘black hole bomb’ where the Zel’dovich amplification gets reflected back efficiently enough to create a positive feedback loop, and the power in the circuit skyrockets exponentially,” says Cromb.

The post Rotating cylinder amplifies electromagnetic fields appeared first on Physics World.

]]>
Research update The Zel'dovich effect is observed in an electromagnetic system for the first time https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Zeldovich-experiment-equipment.jpeg newsletter1
Structural battery is world’s strongest, say researchers https://physicsworld.com/a/structural-battery-is-worlds-strongest-say-researchers/ Mon, 30 Sep 2024 15:34:56 +0000 https://physicsworld.com/?p=117105 Carbon fibre-based electrodes are key to success

The post Structural battery is world’s strongest, say researchers appeared first on Physics World.

]]>
A prototype described as the world’s strongest functional structural battery has been unveiled by researchers in Sweden. The device has an elastic modulus that is much higher than any previous design and was developed by Leif Asp and his colleagues at Chalmers University of Technology. The battery could be an important step towards lighter and more space-efficient electric vehicles (EVs).

Structural batteries are an emerging technology that store electrical energy while also bearing mechanical loads. They could be especially useful in EVs, where the extra weight and volume associated with batteries could be minimized by incorporating the batteries into a vehicle’s structural components.

In 2018, Asp’s team made a promising step towards practical structural batteries – and was rewarded with a mention in Physics World’s Top ten breakthroughs of 2018. That year, the team showed how a trade-off could be reached between the mechanical strength of highly ordered carbon fibres and the desired electrochemical properties of less-ordered structures.

Building on this, Asp and colleagues unveiled their first-generation structural battery in 2021. “Here, we used carbon fibres as the negative electrode but a commercial lithium iron phosphate (LFP) on an aluminium foil as a positive electrode, and impregnated it with the resin by hand,” Asp recalls.

Solid–liquid electrolyte

This involved using a biphasic solid–liquid electrolyte, with the liquid phase transporting ions between the electrodes and the solid phase providing mechanical structure through its stiffness. The battery offered a gravimetric energy density of 24 Wh/kg. This is much lower than that of the conventional batteries currently used in EVs, which deliver about 250 Wh/kg.
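The gap between those energy densities is easiest to see as a mass comparison. The 60 kWh pack size below is an illustrative EV figure, not one from the paper; the energy densities are those quoted in this article:

```python
# Mass of cells needed to store a given energy at different gravimetric
# energy densities. The 60 kWh pack is an illustrative assumption.
def cell_mass_kg(energy_wh, density_wh_per_kg):
    return energy_wh / density_wh_per_kg

pack_wh = 60_000
for label, density in [("conventional EV cell (250 Wh/kg)", 250),
                       ("first-gen structural (24 Wh/kg)", 24),
                       ("third-gen structural (30 Wh/kg)", 30)]:
    print(f"{label}: {cell_mass_kg(pack_wh, density):,.0f} kg")
```

A straight mass comparison undersells structural batteries, of course: their weight does double duty as load-bearing structure, so the relevant saving is in the chassis material they replace.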

By 2023, Asp’s team had improved on this approach with a second-generation structural battery that used the same constituents, but employed an improved manufacturing method. This time, the team used an infusion technique to ensure the resin was distributed more evenly throughout the carbon fibre network.

In this incarnation, the team enhanced the battery’s negative electrode by using ultra-thin spread tow carbon fibre, where the fibres are spread into thin sheets. This approach improved both the mechanical strength and the electrical conductivity of the battery. At that stage, however, the mechanical strength of the battery was still limited by the LFP positive electrode.

Now, the team has addressed this challenge by using a carbon fibre-based positive electrode. Asp says, “This is the third generation, and is the first all-fibre structural battery, as has always been desired. Using carbon fibres in both electrodes, we could boost the battery’s elastic modulus, without suffering from reduced energy density.”

To achieve this, the researchers coated the surface of the carbon fibres with a layer of LFP using electrophoretic deposition. This is a technique whereby charged particles suspended in a liquid are deposited onto substrates using electric fields. Additionally, the team used a thin cellulose separator to further enhance the battery’s energy density.

All of these components were then embedded in the battery’s structural electrolyte and cured in resin, using the same infusion technique developed for the second-generation battery.

Stronger and denser

The latest improvements delivered a battery with an energy density of 30 Wh/kg and an elastic modulus greater than 76 GPa when tested in a direction parallel to the carbon fibres. This makes it by far the strongest structural battery reported to date, exceeding the team’s previous record of 25 GPa and making the battery stiffer than aluminium. Alongside its good mechanical performance, the battery also demonstrated nearly 100% efficiency in storing and releasing charge, even after 1000 cycles of charging and discharging.

Building on this success, the team now aims to further enhance the battery’s performance. “We are now working on small modifications to the current design,” Asp says. “We expect to be able to make structural battery cells with an elastic modulus exceeding 100 GPa and an energy density exceeding 50 Wh/kg.”

This ongoing work could pave the way for even stronger and more efficient structural batteries, which could have a transformative impact on the design and performance of EVs in the not-too-distant future. It could also help reduce the weight of laptop computers, aeroplanes and ships.

The research is described in Advanced Materials.

The post Structural battery is world’s strongest, say researchers appeared first on Physics World.

]]>
Research update Carbon fibre-based electrodes are key to success https://physicsworld.com/wp-content/uploads/2024/09/30-9-2024-strong-battery.jpg
Nickel langbeinite might be a new quantum spin liquid candidate https://physicsworld.com/a/nickel-langbeinite-might-be-a-new-quantum-spin-liquid-candidate/ Mon, 30 Sep 2024 13:00:58 +0000 https://physicsworld.com/?p=117000 The phase diagram of this new material contains a "centre of liquidity"

The post Nickel langbeinite might be a new quantum spin liquid candidate appeared first on Physics World.

]]>
A nickel-based material belonging to the langbeinite family could be a new three-dimensional quantum spin liquid candidate, according to new experiments at the ISIS Neutron and Muon Source in the UK. The work, performed by researchers from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the Helmholtz-Zentrum Berlin (HZB) in Germany and Okayama University in Japan, is at the fundamental research stage for the moment.

Quantum spin liquids (QSLs) are magnetic materials that cannot arrange their magnetic moments (or spins) into a regular and stable pattern. This “frustrated” behaviour is very different from that of ordinary ferromagnets or antiferromagnets, which have spins that point in the same or alternating directions, respectively. Instead, the spins in QSLs constantly change direction as if they were in a fluid, producing an entangled ensemble of spin-ups and spin-downs even at ultracold temperatures, where the spins of most materials freeze solid.
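The simplest picture of this frustration is three Ising spins on a triangle with antiferromagnetic coupling – a far cruder model than the Heisenberg physics of the material below, but it shows why triangle-based lattices cannot settle into a single ordered pattern. Enumerating all eight configurations:

```python
from itertools import product

# Geometric frustration in miniature: three Ising spins on a triangle with
# antiferromagnetic coupling J > 0, energy E = J * sum(s_i * s_j) over the
# three bonds. All three bonds can never be "happy" (antialigned) at once.
J = 1.0
bonds = [(0, 1), (1, 2), (0, 2)]

energies = {}
for spins in product([-1, +1], repeat=3):
    energies[spins] = J * sum(spins[i] * spins[j] for i, j in bonds)

e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]
print(f"minimum energy {e_min} (not the unfrustrated -3J), "
      f"{len(ground_states)} degenerate ground states")
```

The best any configuration can do is satisfy two bonds and frustrate the third, leaving six degenerate ground states – the kind of macroscopic degeneracy that, with quantum fluctuations added, underlies spin-liquid behaviour.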

So far, only a few real-world QSL materials have been observed, mostly in quasi-one-dimensional chain-like magnets and a handful of two-dimensional materials. The new candidate material – K₂Ni₂(SO₄)₃ – is a langbeinite, a family of sulphate minerals rarely found in nature whose chemical compositions can be changed by replacing one or two of the elements in the compound. K₂Ni₂(SO₄)₃ is composed of a three-dimensional network of corner-sharing triangles forming two trillium lattices made from the nickel ions. The magnetic network of langbeinite shares some similarities with the QSL pyrochlore lattice, which researchers have been studying for the last 30 years, but is also quite different in many ways.

A strongly correlated ground state at up to 20 K

The researchers, led by Ivica Živković at the EPFL, fabricated the new material especially for their study. In their previous work, which was among the first investigations of the magnetic properties of langbeinites, they showed that the compound has a strongly correlated ground state at temperatures up to at least 20 K.

In their latest work, they used a technique called inelastic neutron scattering, which can measure magnetic excitations, at the ISIS Neutron and Muon Source of the STFC Rutherford Appleton Laboratory to directly observe this correlation.

Theoretical calculations by Okayama University’s Harald Jeschke, which included density functional theory-based energy mappings, and classical Monte Carlo and pseudo-fermion functional renormalization group (PFFRG) calculations, performed by Johannes Reuther at the HZB to model the behaviour of K2Ni2(SO4)3, agreed exceptionally well with the experimental measurements. In particular, the phase diagram of the material revealed a “centre of liquidity” that corresponds to the trillium lattice in which each triangle is turned into a tetrahedron.

Particular set of interactions supports spin-liquid behaviour

The researchers say that they undertook the new study to better understand why the ground state of this material was so dynamic. Once they had performed their theoretical calculations and could model the material’s behaviour, the challenge was to identify the type of geometric frustration that was at play. “K2Ni2(SO4)3 is described by five magnetic interactions (J1, J2, J3, J4 and J5), but the highly frustrated tetra-trillium lattice has only one non-zero J,” explains Živković. “It took us some time to first find this particular set of interactions and then to prove that it supports spin-liquid behaviour.”

“Now that we know where the highly frustrated behaviour comes from, the question is whether some exotic quasiparticles can be associated with this new spin arrangement,” he tells Physics World.

Živković says the research, which is detailed in Nature Communications, remains in the realm of fundamental research for the moment and that it is too early to talk about any real-world applications.

Metasurface-enhanced camera performs hyperspectral and polarimetric imaging https://physicsworld.com/a/metasurface-enhanced-camera-performs-hyperspectral-and-polarimetric-imaging/ Mon, 30 Sep 2024 08:30:37 +0000 https://physicsworld.com/?p=117061 Inexpensive metasurface could revolutionize the capabilities of conventional imaging systems

The post Metasurface-enhanced camera performs hyperspectral and polarimetric imaging appeared first on Physics World.

A team of US-based researchers has developed an inexpensive and ultrathin metasurface that, when paired with a neural network, enables a conventional camera to capture detailed hyperspectral and polarization data from a single snapshot. The innovation could pave the way for significant advances in medical diagnostics, environmental monitoring, remote sensing and even consumer electronics.

The research team, based at Pennsylvania State University, designed a large set of silicon-based meta-atoms with unique spectral and polarization responses. When spatially arranged within small “superpixels”, these meta-atoms encode both spectral and polarization information into distinct intensity patterns that traditional cameras cannot interpret on their own. To decode this information into a human-readable form, the team uses machine learning algorithms that recognize the patterns and map them back to the spectral and polarization data they encode.

“A normal camera typically captures only the intensity distribution of light and is insensitive to its spectral and polarization properties. Our metasurface consists of numerous distinct meta-atoms, each designed to exhibit different transmission characteristics for various incoming spectra and polarization states,” explains lead corresponding author Xingjie Ni.

“The metasurface consists of many such superpixels; the patterns generated by these superpixels are then captured by a conventional camera sensor,” he adds. “Essentially, the metasurface translates information that is normally invisible to the camera into a format it can detect. Each superpixel corresponds to one pixel in the final image, allowing us to obtain not only intensity information but also the spectrum and polarization data for each pixel.”
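The encoding principle can be sketched with a toy linear model: each meta-atom in a superpixel has its own transmission across a set of spectral/polarization channels, so the camera's per-atom intensities are a matrix product of those responses with the unknown incident light. Here plain least squares stands in for the team's trained neural network, and the channel and atom counts are arbitrary illustrations, not the actual device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 8   # hypothetical spectral/polarization channels per superpixel
n_atoms = 16     # hypothetical meta-atoms per superpixel, each with a distinct response

# Each row: one meta-atom's transmission across the channels
T = rng.uniform(0.0, 1.0, size=(n_atoms, n_channels))

# Unknown incident light: intensity in each channel
s_true = rng.uniform(0.0, 1.0, size=n_channels)

# The camera records only one total intensity per meta-atom
measured = T @ s_true

# Decoding: least squares stands in for the trained neural network
s_est, *_ = np.linalg.lstsq(T, measured, rcond=None)
print(np.allclose(s_est, s_true))  # True: the channel information is recoverable
```

Because there are more meta-atoms than channels, the measurements over-determine the unknowns, which is what makes the inversion well posed even before a neural network adds robustness to noise.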

Widespread applications

In terms of potential applications, Ni pictures the technology enabling the development of miniaturized and portable hyperspectro-polarimetry imaging systems, which he believes could revolutionize the abilities of existing imaging systems. “For instance, we might develop a small add-on for smartphone cameras to enhance their capabilities, allowing users to capture rich spectral and polarization information that was previously inaccessible in such a compact form,” he says.

According to Ni, traditional hyperspectral and polarimetric cameras, which often are bulky and expensive to produce, capture either spectral or polarization data, but not both simultaneously. Such systems are also limited in resolution, not easily integrated into compact devices, and typically require complex alignment and calibration.

In contrast, the team’s metasurface encoder is ultracompact, lightweight and cost-effective. “By integrating it directly onto a conventional camera sensor, we eliminate the need for additional bulky components, reducing the overall size and complexity of the system,” says Ni.

Ni also observes that the metasurface’s ability to encode spectral and polarization information into intensity patterns enables simultaneous hyperspectral and polarization imaging without significant modifications to existing imaging systems. Moreover, the flexibility in designing the meta-atoms enables the team to achieve high-resolution and high-sensitivity detection of spectral and polarization variations.

“This level of customization and integration is difficult to attain with traditional optical systems. Our approach also reduces data redundancy and improves imaging speed, which is crucial for applications in dynamic, high-speed environments,” he says.

Moving forward, Ni confirms that he and his team have applied for a patent to protect the technology and facilitate its commercialization. They are now working on robust integration techniques and exploring ways to further reduce manufacturing costs by utilizing photolithography for large-scale production of the metasurfaces, which should make the technology more accessible for widespread applications.

“In addition, the concept of a light ‘encoder’ is versatile and can be extended to other aspects of light beyond spectral and polarization information,” says Ni.

“Our group is actively developing different metasurface encoders designed to capture the phase and temporal information of the light field,” he tells Physics World. “This could open up new possibilities in fields like optical computing, telecommunications and advanced imaging systems. We are excited about the potential impact of this technology and are committed to advancing it further.”

The results of the research are presented in Science Advances.

Physicists reveal the mechanics of tea scum https://physicsworld.com/a/physicists-reveal-the-mechanics-of-tea-scum/ Sat, 28 Sep 2024 09:00:42 +0000 https://physicsworld.com/?p=117065 Researchers have looked at how tea scum breaks apart when stirred

The post Physicists reveal the mechanics of tea scum appeared first on Physics World.

If you have ever brewed a cup of black tea with hard water you will be familiar with the oily film that can form on the surface of the tea after just a few minutes.

Known as “tea scum” the film consists of calcium carbonate crystals within an organic matrix. Yet it can be easily broken apart with a quick stir of a teaspoon.

Physicists in France and the UK have now examined how this film forms and also what happens when it breaks apart through stirring.

They did so by first sprinkling graphite powder into a water tank. Thanks to capillary forces, the particles gradually clump together to form rafts. The researchers then generated waves in the tank that broke apart the rafts and filmed the process with a camera.

Through these experiments and theoretical modelling, they found that the rafts break up when diagonal cracks form at the raft’s centre. This causes them to fracture into large chunks, which the waves then gradually erode away.

They found that the polygonal shapes created when the rafts split up are the same as those seen in ice floes.

Despite the visual similarities, however, sea ice and tea scum break up through different physical mechanisms. While ice is brittle, bending and snapping under the weight of crushing waves, the graphite rafts come apart when the viscous stress exerted by the waves overcomes the capillary forces that hold the individual particles together.

Buoyed by their findings, the researchers now plan to use their model to explain the behaviour of other thin biofilms, such as pond scum.

Positronium gas is laser-cooled to one degree above absolute zero https://physicsworld.com/a/positronium-gas-is-laser-cooled-to-one-degree-above-absolute-zero/ Fri, 27 Sep 2024 13:42:03 +0000 https://physicsworld.com/?p=117071 New cooling technique could help reveal physics beyond the Standard Model

The post Positronium gas is laser-cooled to one degree above absolute zero appeared first on Physics World.

Researchers at the University of Tokyo have published a paper in the journal Nature that describes a new laser technique that is capable of cooling a gas of positronium atoms to temperatures as low as 1 K. Written by Kosuke Yoshioka and colleagues at the University of Tokyo, the paper follows on from a publication earlier this year from the AEgIS team at CERN, who described how a different laser technique was used to cool positronium to 170 K.

Positronium comprises a single electron bound to its antimatter counterpart, the positron. Although electrons and positrons will ultimately annihilate each other, they can briefly bind together to form an exotic atom. Electrons and positrons are fundamental particles that are nearly point like, so positronium provides a very simple atomic system for experimental study. Indeed, this simplicity means that precision studies of positronium could reveal new physics beyond the Standard Model.

Quantum electrodynamics

One area of interest is the precise measurement of the energy required to excite positronium from its ground state to its first excited state. Such measurements could enable more rigorous experimental tests of quantum electrodynamics (QED). While QED has been confirmed to extraordinary precision, any tiny deviations could reveal new physics.

An important barrier to making precision measurements is the inherent motion of positronium atoms. “This large randomness of motion in positronium is caused by its short lifetime of 142 ns, combined with its small mass − 1000 times lighter than a hydrogen atom,” Yoshioka explains. “This makes precise studies challenging.”

In 1988, two researchers at Lawrence Livermore National Laboratory in the US published a theoretical exploration of how the challenge could be overcome by using laser cooling to slow positronium atoms to very low speeds. Laser cooling is routinely used to cool conventional atoms and involves having the atoms absorb photons and then re-emitting the photons in random directions.

Chirped pulse train

Building on this early work, Yoshioka’s team has developed a new laser system that is ideal for cooling positronium. Yoshioka explains that in the Tokyo setup, “the laser emits a chirped pulse train, with the frequency increasing at 500 GHz/μs, and lasting 100 ns. Unlike previous demonstrations, our approach is optimized to cool positronium to ultralow velocities.”

In a chirped pulse, the frequency of the laser light increases over the duration of the pulse. This allows the cooling system to keep the photon absorption on resonance as the atoms slow down and their Doppler shift shrinks.
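A back-of-envelope check shows what the quoted chirp can track. The chirp rate and pulse duration are from the article; the 243 nm wavelength of the 1S–2P positronium cooling transition is an assumption added here for the estimate.

```python
# How much Doppler shift can the chirp follow during one pulse train?
chirp_rate = 500e9 / 1e-6   # 500 GHz per microsecond, in Hz/s
duration = 100e-9           # 100 ns pulse train
wavelength = 243e-9         # assumed 1S-2P positronium transition, m

total_sweep = chirp_rate * duration   # frequency swept during the pulse, Hz
dv = total_sweep * wavelength         # Doppler shift: delta_f = delta_v / lambda

print(total_sweep / 1e9)  # 50.0 GHz swept
print(dv)                 # ~1.2e4 m/s of velocity change kept on resonance
```

A sweep covering roughly ten kilometres per second of velocity is well matched to the large thermal speeds that positronium's tiny mass implies.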

Using this technique, Yoshioka’s team successfully cooled positronium atoms to temperatures around 1 K, all within just 100 ns. “This temperature is significantly lower than previously achieved, and simulations suggested that an even lower temperature in the 10 mK regime could be realized via a coherent mechanism,” Yoshioka says. Although the team’s current approach is still some distance from achieving this “recoil limit” temperature, the success of their initial demonstration has given them confidence that further improvements could bring them closer to this goal.

“This breakthrough could potentially lead to stringent tests of particle physics theories and investigations into matter-antimatter asymmetry,” Yoshioka predicts. “That might allow us to uncover major mysteries in physics, such as the reason why antimatter is almost absent in our universe.”

Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ https://physicsworld.com/a/ask-me-anything-fatima-gunning-thinking-outside-the-box-is-a-winner-when-it-comes-to-problem-solving/ Fri, 27 Sep 2024 13:00:18 +0000 https://physicsworld.com/?p=116926 Physicist Fatima Gunning explains how mentorship has helped her grow as a researcher and teacher

The post Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ appeared first on Physics World.

What skills do you use every day in your job?

I am fortunate to have several different roles, and problem-solving is a skill I use in each. As physicists, we’re constantly solving problems in different ways, and, as researchers, we are always trying to question the unknown. To understand the physical world more, we need to be curious and willing to reformulate our questions when they are challenged.

In everyday work such as administration, research, teaching and mentoring, I also find that thinking outside the box is a winner when it comes to problem solving. I try not to just go along with whatever the team or the group is thinking. Instead, I try to consider different points of view. Researchers need to keep asking ‘Why?’ Trying to understand a problem or challenge – listening and considering other views – is essential.

Another critical skill I use is communication. In my work, I need to be able to listen, speak and write a lot. It could be to convey why our research is important and why it should be funded. It could be to craft new policies, mediate conflict or share research findings clearly with colleagues, students, managers and members of the public. So communication is definitely key.

What do you like best and least about your job?

I graduated about 30 years ago and, during that time, the things I like best or least have never stayed the same. At the moment, the best part of my job is working with research students – not just at master’s and PhD level, but final-year undergraduates who might be getting hands-on experience in a lab for the first time. There’s great satisfaction and a sense of “job well done” whenever I demonstrate a concept they’ve known for several years but have never “seen” in action. When they shout “Ah, I get it!”, it’s a great feeling. It’s also really rewarding to receive similar reactions from my education and public engagement work, such as when I visit primary and secondary schools.

At the moment, my least favourite part of my job is the lack of time. I’m not very good at time management, and I find it hard to say “no” to people in need, especially if I know how to help them. It’s difficult to juggle work, mentoring, volunteering activities and home life. During the COVID-19 pandemic, I realized that taking time off to pursue a hobby is vital – not only for my wellbeing but also to give me clarity in decision making.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had realized the importance of mentorship sooner. Throughout my career, I’ve had people who’ve supported me along the way. It might just have been a brief conversation in the corridor, help with a grant application or a serendipitous chat at a conference, although at other times it might have been through in-depth discussion of my work. I only started to regard the help as “mentorship” when I did a leadership course that included mentor/mentee training. Looking back, those encounters really boosted my confidence and helped me make rational choices.

Once you realize what mentors can do, you can plan to speak to people strategically. These conversations can help you make decisions and introduce you to new contacts. They can also help you understand what career paths are available – it’s okay to take your time to explore career options or even to change direction. Students and young professionals should also engage with professional societies, such as the Institute of Physics. There are so many opportunities to meet people in your field and people are always happy to share their experiences. We need to come out of our “shy” shells and talk to people, no matter how senior and famous they are. That’s certainly the message I’d have given myself 30 years ago.

Knowledge grows step-by-step despite the exponential growth of papers, finds study https://physicsworld.com/a/knowledge-grows-step-by-step-despite-the-exponential-growth-of-papers-finds-study/ Fri, 27 Sep 2024 12:02:16 +0000 https://physicsworld.com/?p=117039 The authors believe the finding indicates a decline in scientific productivity

The post Knowledge grows step-by-step despite the exponential growth of papers, finds study appeared first on Physics World.

Scientific knowledge is growing at a linear rate despite an exponential increase in publications. That’s according to a study by physicists in China and the US, who say their finding points to a decline in overall scientific productivity. The study therefore contradicts the notion that productivity and knowledge grow hand in hand – but adds weight to the view that the rate of scientific discovery may be slowing or that “information fatigue” and the vast number of papers can drown out new discoveries.

Defining knowledge is complex, but it can be thought of as a network of interconnected beliefs and information. To measure it, the authors previously created a knowledge quantification index (KQI). This tool uses various scientific impact metrics to examine the network structures created by publications and their citations and quantifies how well publications reduce the uncertainty of the network, and thus knowledge.

The researchers claim the tool’s effectiveness has been validated through multiple approaches, including analysing the impact of work by Nobel laureates.

In the latest study, published on arXiv, the team analysed 213 million scientific papers, published between 1800 and 2020, as well as 7.6 million patents filed between 1976 and 2020. Using the data, they built annual snapshots of citation networks, which they then scrutinised with the KQI to observe changes in knowledge over time.

The researchers – based at Shanghai Jiao Tong University in Shanghai, the University of Minnesota in the US and the Institute of Geographic Sciences and Natural Resources Research in Beijing – found that while the number of publications has been increasing exponentially, knowledge has not.

Instead, their KQI suggests that knowledge has been growing in a linear fashion. Different scientific disciplines display varying rates of knowledge growth, but they all follow the same linear growth pattern. Patent growth was found to be much slower than publication growth, but it too shows linear growth in the KQI.

According to the authors, the analysis indicates “no significant change in the rate of human knowledge acquisition”, suggesting that our understanding of the world has been progressing at a steady pace.

If scientific productivity is defined as the number of papers required to grow knowledge, this signals a significant decline in productivity, the authors claim.

The analysis also revealed inflection points associated with new discoveries, major breakthroughs and other important developments, with knowledge growing at different linear rates before and after.

Such inflection points create the illusion of exponential knowledge growth due to the sudden alteration in growth rates, which may, according to the study authors, have led previous studies to conclude that knowledge is growing exponentially.

Research focus

“Research has shown that the disruptiveness of individual publications – a rough indicator of knowledge growth – has been declining over recent decades,” says Xiangyi Meng, a physicist at Northwestern University in the US, who works in network science but was not involved in the research. “This suggests that the rate of knowledge growth must be slower than the exponential rise in the number of publications.”

Meng adds, however, that the linear growth finding is “surprising” and “somewhat pessimistic” – and that further analysis is needed to confirm if knowledge growth is indeed linear or whether it “more likely, follows a near-linear polynomial pattern, considering that human civilization is accelerating on a much larger scale”.

Due to the significant variation in the quality of scientific publications, Meng says that article growth may “not be a reliable denominator for measuring scientific efficiency”. Instead, he suggests that analysing research funding and how it is allocated and evolves over time might be a better focus.

Genetically engineered bacteria solve computational problems https://physicsworld.com/a/genetically-engineered-bacteria-solve-computational-problems/ Fri, 27 Sep 2024 08:00:38 +0000 https://physicsworld.com/?p=116977 A cell-based biocomputer can identify prime numbers, recognize vowels and answer mathematical questions

The post Genetically engineered bacteria solve computational problems appeared first on Physics World.

Cell-based biocomputing is a novel technique that uses cellular processes to perform computations. Such micron-scale biocomputers could overcome many of the energy, cost and technological limitations of conventional microprocessor-based computers, but the technology is still very much in its infancy. One of the key challenges is the creation of cell-based systems that can solve complex computational problems.

Now a research team from the Saha Institute of Nuclear Physics in India has used genetically modified bacteria to create a cell-based biocomputer with problem-solving capabilities. The researchers created 14 engineered bacterial cells, each of which functioned as a modular and configurable system. They demonstrated that by mixing and matching appropriate modules, the resulting multicellular system could solve nine yes/no computational decision problems and one optimization problem.

The cellular system, described in Nature Chemical Biology, can identify prime numbers, check whether a given letter is a vowel, and even determine the maximum number of pizza or pie slices obtained from a specific number of straight cuts. Here, senior author Sangram Bagh explains the study’s aims and findings.

How does cell-based computing work?

Living cells use computation to carry out biological tasks. For instance, our brain’s neurons communicate and compute to make decisions; and in the event of an external attack, our immune cells collaborate, compute and make judgements. The development of synthetic biology opens up new avenues for engineering live cells to carry out human-designed computation.

The fusion of biology and computer science has resulted in the development of living cell-based biocomputers to solve computational problems. Here, living cells are engineered to serve as the circuits and components from which biocomputers are built. Lately, researchers have been manipulating living cells to find solutions for maze and graph-colouring puzzles.

Why did you employ bacteria to perform the computations?

Bacteria are single-cell organisms, 2–5 µm in size, with fast replication times (about 30 min). They can survive in many conditions and require minimum energy, thus they provide an ideal chassis for building micron-scale computer technology. We chose to use Escherichia coli, as it has been studied in detail and is easy to manipulate, making it a logical choice to build a biocomputer.

How did you engineer the bacteria to solve problems?

We built synthetic gene regulatory networks in bacteria in such a way that each bacterium worked as an artificial neuro-synapse. In this way, 14 genetically engineered bacteria were created, each acting like an artificial neuron, which we named “bactoneurons”. When these bactoneurons are mixed in a liquid culture in a test tube, they create an artificial neural network that can solve computational problems. The “LEGO-like” system incorporates 14 engineered cells (the “LEGO blocks”) that you can mix and match to build one of 12 specific problem solvers on demand.

How do the bacteria report their answers?

We pose problems to the bacteria in a chemical space using a binary system. The bacteria were questioned by adding (“one”) or not adding (“zero”) four specific chemicals. The bacterial artificial neural network analysed the data and responded by producing different fluorescent proteins. For example, when we asked whether three is a prime number, the bacteria glowed green to print “yes”. Similarly, when we asked whether four is a prime number, the bacteria glowed red and said “no”.
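The chemical interface can be mimicked in a few lines. The four-chemical binary encoding and the green/red fluorescent readout are from the interview; an ordinary primality test stands in for the engineered gene networks, so everything else here is illustrative.

```python
def to_chemicals(n):
    """Encode 0-15 as presence (1) / absence (0) of four input chemicals."""
    return [(n >> i) & 1 for i in range(4)]

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def culture_response(chemicals):
    """Stand-in for the bactoneuron network: read the four input bits,
    answer with a fluorescence colour."""
    n = sum(bit << i for i, bit in enumerate(chemicals))
    return "green (yes)" if is_prime(n) else "red (no)"

print(culture_response(to_chemicals(3)))  # green (yes)
print(culture_response(to_chemicals(4)))  # red (no)
```

The point of the real system is that this truth table is distributed across 14 engineered cell types mixed in a test tube, rather than evaluated by a processor.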

How could such a biocomputer be used in real-world applications?

Bacteria are tiny organisms, about one-twentieth the diameter of a human hair. It is not possible to make a silicon computer so small. Making such a small computer with bacteria will open a new horizon in microscale computer technology. Its use will extend from new medical technology and material technology to space technology.

For example, one may imagine a set of engineered bacteria or other cells within the human body taking decisions and acting upon a particular disease state, based on multiple biochemical and physiological cues.

Scientists have proposed using synthetically engineered organisms to help in situ resource utilization to build a human research base on Mars. However, it may not be possible to instruct each of the organisms remotely to perform a specific task based on local conditions. Now, one can imagine the tiny engineered organisms working as a biocomputer, interacting with each other, and taking autonomous decisions on action without any human intervention.

The importance of this work in basic science is also immense. We know that recognizing prime numbers or vowels can only be done by humans or computers – but now genetically engineered bacteria are doing the same. Such observations raise new questions about the meaning of “intelligence” and offer some insight on the biochemical nature and the origin of intelligence.

What are you planning to do next?

We would like to build more complex biocomputers to perform more complex computation tasks with multitasking capability. The ultimate goal is to build artificially intelligent bacteria.

Field work – the physics of sheep, from phase transitions to collective motion https://physicsworld.com/a/field-work-the-physics-of-sheep-from-phase-transitions-to-collective-motion/ Thu, 26 Sep 2024 12:23:48 +0000 https://physicsworld.com/?p=116797 Physics sheds a new insight on the behaviour of sheep flocks, helping with new tips on shepherding

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.

You’re probably familiar with the old joke about a physicist who, when asked to use science to help a dairy farmer, begins by approximating a spherical cow in a vacuum. But maybe it’s time to challenge this satire on how physics-based models can absurdly over-simplify systems as complex as farm animals. Sure, if you want to understand how a cow or a sheep works, approximating those creatures as spheres might not be such a good idea. But if you want to understand a herd or a flock, you can learn a lot by reducing individual animals to mere particles – if not spheres, then at least ovoids (or bovoids; see what I did there?).

By taking that approach, researchers over the past few years have not only shed new insight on the behaviour of sheep flocks but also begun to explain how shepherds do what they do – and might even be able to offer them new tips about controlling their flocks. Welcome to the emerging science of sheep physics.

“Boids” of a feather

Physics-based models of the group dynamics of living organisms go back a long way. In 1987 Craig Reynolds, a software engineer with the California-based computer company Symbolics, wrote an algorithm to try to mimic the flocking of birds. By watching blackbirds flock in a local cemetery, Reynolds intuited that each bird responds to the motions of its immediate neighbours according to some simple rules.

His simulated birds, which he called “boids” (a fusion of bird and droid), would each match their speed and orientation to those of others nearby, and would avoid collisions as if there were a repulsive force between them. Those rules alone were enough to generate group movements resembling the striking flocks or “murmurations” of real-life blackbirds and starlings that swoop and fly together in seemingly perfect unison. Reynolds’ algorithms were adapted for film animations such as the herd of wildebeest in The Lion King.
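Reynolds' three rules are simple enough to sketch in a few lines. The structure below follows the rules as described above (alignment, cohesion, collision avoidance); the parameter values are arbitrary illustrations, not those of his 1987 algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
pos = rng.uniform(0, 10, size=(N, 2))  # starting positions
vel = rng.normal(0, 1, size=(N, 2))    # starting velocities

def boids_step(pos, vel, dt=0.1, radius=2.0, r_min=0.5,
               align=0.05, cohere=0.01, avoid=0.1):
    """One update of Reynolds-style rules: match neighbours' velocity,
    drift toward their centre, and steer away from any that get too close."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nbr = (dist < radius) & (dist > 0)
        if nbr.any():
            new_vel[i] += align * (vel[nbr].mean(axis=0) - vel[i])  # alignment
            new_vel[i] += cohere * d[nbr].mean(axis=0)              # cohesion
            close = nbr & (dist < r_min)
            if close.any():
                new_vel[i] -= avoid * d[close].mean(axis=0)         # separation
    return pos + dt * new_vel, new_vel

for _ in range(100):
    pos, vel = boids_step(pos, vel)
```

After a few hundred such steps the initially random velocities tend to align into coherent moving groups, with no leader and no global plan.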

Murmuration of starlings

Over the next two or three decades, these models were modified and extended by other researchers, including the future Nobel-prize-winning physicist Giorgio Parisi, to study collective motions of organisms ranging from birds to schooling fish and swarming bacteria. Those studies fed into the emerging science of active matter, in which particles – which could be simple colloids – move under their own propulsion. In the late 1990s physicist Tamás Vicsek and his student Andras Czirók, at Eötvös University in Budapest, revealed analogies between the collective movements of such self-propelled particles and the reorientation of magnetic spins in regular arrays, which also “feel” and respond to what their neighbours are doing (Phys. Rev. Lett. 82 209; J. Phys. A: Math. Gen. 30 1375).

In particular, the group motion can undergo abrupt phase transitions – global shifts in the pattern of behaviour, analogous to how matter can switch to a bulk magnetized state – as the factors governing individual motion, such as average velocity and strength of interactions, are varied. In this way, the collective movements can be summarized in phase diagrams, like those depicting the gaseous, liquid and solid states of matter as variables such as temperature and density are changed.
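That kind of abrupt transition shows up even in a minimal Vicsek-style simulation, in which each particle adopts the mean heading of its neighbours plus some angular noise. The sketch below is illustrative rather than a reproduction of the published models, and every parameter value is invented:

```python
import numpy as np

def vicsek(n=200, L=10.0, v=0.3, r=1.0, eta=0.5, steps=200, seed=0):
    """Minimal Vicsek-style model on a periodic box: each particle takes the
    mean heading of neighbours within radius r, plus angular noise scaled by
    eta. Returns the final polarisation (0 = disordered, 1 = fully aligned)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, L, (n, 2))
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        new_theta = np.empty(n)
        for i in range(n):
            dx = (pos - pos[i] + L / 2) % L - L / 2   # periodic displacements
            near = (dx**2).sum(axis=1) < r**2          # includes the particle itself
            mean_dir = np.arctan2(np.sin(theta[near]).mean(),
                                  np.cos(theta[near]).mean())
            new_theta[i] = mean_dir + eta * rng.uniform(-np.pi, np.pi)
        theta = new_theta
        pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

Sweeping the noise strength eta and plotting the returned polarisation traces out the hallmark of the transition: order close to 1 at low noise, collapsing towards zero at high noise.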

Models like these have now been used to explore the dynamics not just of animals and bacteria, but also of road traffic and human pedestrians. They can predict the kinds of complex behaviours seen in the real world, such as stop-and-start waves in traffic congestion or the switch to a crowd panic state. And yet the way they represent the individual agents seems – for humans anyway – almost insultingly simple, as if we are nothing but featureless particles propelled by blind forces.

Follow the leader

If these models work for humans, you might imagine they’d be fine for sheep too – which, let’s face it, seem behaviourally and psychologically rather unsophisticated compared with us. But if that’s how you think of sheep, you’ve probably never had to shepherd them. Sheep are decidedly idiosyncratic particles.

“Why should birds, fish or sheep behave like magnetic spins?” asks Fernando Peruani of the University of Cergy Paris. “As physicists we may want that, but animals may have a different opinion.” To understand how flocks of sheep actually behave, Peruani and his colleagues first looked at the available data, and then tried to work out how to describe and explain the behaviours that they saw.

1 Are sheep like magnetic spins?

Sheep walking in a line

In a magnetic material, magnetic spins interact to promote their mutual alignment (or anti-alignment, depending on the material). In the model of collective sheep motion devised by Fernando Peruani of the University of Cergy Paris and colleagues, each sheep is similarly assumed to move in a direction determined by interactions with all the others that depend on their distance apart and their relative angles of orientation. The model predicts that the sheep will fall into loose alignment and move in a line, following a leader that takes a more or less sinuous path over the terrain.

For one thing, says Peruani, “real flocks are not continuously on the move. Animals have to eat, rest, find new feeding areas and so on”. No existing model of collective animal motion could accommodate such intermittent switching between stationary and mobile phases. What’s more, bird murmurations don’t seem to involve any specific individual guiding the collective behaviour, but some animal groups do exhibit a hierarchy of roles.

Elephants, zebras and forest ponies, for example, tend to move in lines such that the animal at the front has a special status. An advantage of such hierarchies is that the groups can respond quickly to decisions made by the leaders, rather than having to come to some consensus within the whole group. On the other hand, it means the group acts on less information than it would have by pooling everyone’s knowledge.

To develop their model of collective sheep behaviour, Peruani and colleagues took a minimalist approach, watching tiny groups of Merino Arles sheep – “flocks” of just two to four individuals – that were free to move around a large field. They found that the groups spend most of their time grazing but would every so often wander off collectively in a line, following the individual at the front (Nat. Phys. 18 1494).

They also saw that any member of the group is equally likely to take the lead in each of these excursions, selected seemingly at random. In other words, as George Orwell famously suggested for certain pigs, all sheep are equal but some are (temporarily) more equal than others. Peruani and colleagues suspected that this switching of leaders allows some information pooling without forcing the group to be constantly negotiating a decision.

The researchers then devised a simple model of the process in which each individual has some probability of switching from the grazing to the moving state and vice versa – rather like the transition probability for emission of a photon from an excited atom. The empirical data suggested that this probability depends on the group size, with the likelihood getting smaller as the group gets bigger. Once an individual sheep has triggered the onset of the “walking phase”, the others follow to maintain group cohesion.

In their model, each individual feels an attractive, cohesive force towards the others and, when moving, tends to align its orientation and velocity with those of its neighbour(s). Peruani and colleagues showed that the model produces episodic switching between a clustered “grazing mode” and collective motion in a line (figure 1). They could also quantify information exchange between the simulated sheep, and found that probabilistic swapping of the leader role does indeed enable the information available to each individual to be pooled efficiently between all.
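A cartoon of this two-state picture fits in a few lines. The rates below are invented for illustration, not the fitted values from the Nature Physics paper; the only feature carried over is that the group-level probability of starting a walk falls as the flock grows:

```python
import random

def simulate_flock(n_sheep, steps=20_000, seed=1):
    """Caricature of intermittent flock dynamics: the whole group is either
    'grazing' or 'walking'. Any sheep can trigger a walk, so the per-step
    start probability is n * p_start_one, with p_start_one shrinking fast
    enough that bigger groups set off less often. Returns the fraction of
    time spent walking."""
    random.seed(seed)
    p_start_one = 0.01 / n_sheep**1.5  # per-sheep trigger rate (illustrative)
    p_stop = 0.02                      # chance the walk ends on each step
    walking, walk_steps = False, 0
    for _ in range(steps):
        if walking:
            walking = random.random() > p_stop
        else:
            walking = random.random() < n_sheep * p_start_one
        walk_steps += walking
    return walk_steps / steps
```

In this toy version, larger simulated flocks spend a smaller fraction of their time on the move, echoing the size dependence seen in the data.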

Although the group size here was tiny, the team has video footage of large flocks of sheep adopting the same follow-my-leader formation, albeit in multiple lines at once. They are now conducting a range of experiments to get a better understanding of the behavioural rules – for example, using sirens to look at how sheep respond to external stimuli and studying herds composed of sheep of different ages (and thus proclivities) to probe the effects of variability.

The team is also investigating whether individual sheep trained to move between two points can “seed” that behaviour in an entire flock. But such experiments aren’t easy, Peruani says, because it’s hard to recruit shepherds. In Europe, they tend to live in isolation on low wages, and so aren’t the most forthcoming of scientific collaborators.

The good shepherd

Of course, shepherds don’t traditionally rely on trained sheep to move their flocks. Instead, they use sheepdogs that are trained for many months before being put to work in the field. If you’ve ever watched a sheepdog in action, it’s obvious they do an amazingly complex job – and surely one that physics can’t say much about? Yet mechanical engineer Lakshminarayanan Mahadevan at Harvard University in the US says that the sheepdog’s task is basically an exercise in control theory: finding a trajectory that will guide the flock to a particular destination efficiently and accurately.

Mahadevan and colleagues found that even this phenomenon can be described using a relatively simple model (arXiv:2211.04352). From watching YouTube videos of sheepdogs in action, he figured there were two key factors governing the response of the sheep. “Sheep like to stay together,” he says – the flock has cohesion. And second, sheep don’t like sheepdogs – there is repulsion between sheep and dog. “Is that enough – cohesion plus repulsion?” Mahadevan wondered.

Sheepdogs and a flock of sheep

The researchers wrote down differential equations to describe the animals’ trajectories and then applied standard optimization techniques to minimize a quantity that captures the desired outcome: moving the flock to a specific location without losing any sheep. Despite the apparent complexity of the dynamical problem, they found it all boiled down to a simple picture. It turns out there are two key parameters that determine the best herding strategy: the size of the flock and the speed with which it moves between initial and final positions.
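A toy version of that cohesion-plus-repulsion picture can be written down directly. Everything here – the coefficients, the dog’s zigzag path, the inverse-distance repulsion – is an invented illustration of the model’s ingredients, not the authors’ optimized solution:

```python
import numpy as np

def drove(n=20, steps=400, dt=0.05, seed=0):
    """Toy droving: sheep feel cohesion towards the flock centroid and
    short-range repulsion from a dog that sweeps side to side a fixed
    distance behind them, pressing the flock towards larger y.
    Returns the flock's net displacement along y."""
    rng = np.random.default_rng(seed)
    sheep = rng.normal(0, 1, (n, 2))
    start_y = sheep[:, 1].mean()
    for t in range(steps):
        centroid = sheep.mean(axis=0)
        # dog zigzags behind the flock as it moves
        dog = centroid + np.array([3 * np.sin(0.1 * t), -2.0])
        coh = 0.5 * (centroid - sheep)  # cohesion: pull towards the centroid
        d = sheep - dog                 # repulsion: push away from the dog
        rep = 2.0 * d / (np.linalg.norm(d, axis=1, keepdims=True)**2 + 1e-6)
        sheep = sheep + dt * (coh + rep)
    return sheep[:, 1].mean() - start_y
```

Because the cohesive forces are internal to the flock, only the dog’s repulsion can move the centroid – which is exactly why the dog’s trajectory becomes a control problem.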

Four possible outcomes emerged naturally from their model. One is simply that the herding fails: nothing a dog can do will get the flock coherently from point A to point B. This might be the case, for example, if the flock is just too big, or the dog too slow. But there are three shepherding strategies that do work.

One involves the dog continually running from one side of the flock to the other, channelling the sheep in the desired direction. This is the method known to shepherds as “droving”. If, however, the herd is relatively small and the dog is fast, there can be a better technique that the team called “mustering”. Here the dog propels the flock forward by running in corkscrews around it. In this case, the flock keeps changing its overall shape like a wobbly ellipse, first elongating and then contracting around the two orthogonal axes, as if breathing. Both strategies are observed in the field (figure 2).

But the final strategy the model generated, dubbed “driving”, is not a tactic that sheepdogs have been observed to use. In this case, if the flock is large enough, the dog can run into the middle of it and the sheep retreat but don’t scatter. Then the dog can push the flock forward from within, like a driver in a car. This approach will only work if the flock is very strongly cohesive, and it’s not clear that real flocks ever have such pronounced “stickiness”.

2 Shepherding strategies: the three types of herding

Diagram of herding patterns

In the model of interactions between a sheepdog and its flock developed by Lakshminarayanan Mahadevan at Harvard University and coworkers, optimizing a mathematical function that describes how well the dog transports the flock results in three possible shepherding strategies, depending on the precise parameters in the model. In “droving”, the dog runs from side to side to steer the flock towards the target location. In “mustering”, the dog takes a helix-like trajectory, repeatedly encircling the flock. And in “driving”, the dog steers the flock from “inside” by the aversion – modelled as a repulsive force – of the sheep for the dog.

These three regimes, derived from agent-based models (ABM) and models based on ordinary differential equations (ODE), are plotted above. In the left column, the mean path of the flock (blue) over time is shown as it is driven by a shepherd on a separate path (red) towards a target (green square). Columns 2–4 show snapshots from column 1, with trajectories indicated in black, where fading indicates history. From left to right, snapshots represent the flock at later time points.

These herding scenarios can be plotted on a phase diagram, like the temperature–density diagram for states of matter, but with flock size and speed as the two axes. But do sheepdogs, or their trainers, have an implicit awareness of this phase diagram, even if they don’t think of it in those terms? Mahadevan suspects that herding techniques are in fact developed by trial and error – if one strategy doesn’t work, they will try another.

Mahadevan admits that he and his colleagues have neglected some potentially important aspects of the problem. In particular, they assumed that the animals can see in every direction around them. Sheep do have a wide field of vision because, like most prey-type animals, they have eyes on the sides of their heads. But dogs, like most predators, have eyes at the front and therefore a more limited field of view. Mahadevan suspects that incorporating these features of the agents’ vision will shift the phase boundaries, but not alter the phase diagram qualitatively.

Another confounding factor is that sheep might alter their behaviour in different circumstances. Chemical engineer Tuhin Chakrabortty of the Georgia Institute of Technology in Atlanta, together with biomolecular engineer Saad Bhamla, has also used physics-based modelling to look at the shepherding problem. They say that sheep behave differently on their own from how they do in a flock. A lone sheep flees from a dog, but in a flock they employ a more “selfish” strategy, with those on the periphery trying to shove their way inside to be sheltered by the others.

3 Heavy and light: how flocks interact with sheepdogs

How flocks interact with sheepdogs

In the agent-based model of the interaction between sheep and a sheepdog devised by Tuhin Chakrabortty and Saad Bhamla, sheep may respond to a nearby dog by reorienting themselves to face away from or at right angles to it. Different sheep might have different tendencies for this – “heavy” sheep ignore the dog unless they are facing towards it. The task of the dog could be to align the flock facing away from it (herding) or to divide the flock into differently aligned subgroups (shedding).

What’s more, says Chakrabortty, contrary to the stereotype, sheep can show considerable individual variation in how they respond to a dog. Essentially, the sheep have personalities. Some seem terrified and easily panicked by a dog while others might ignore – or even confront – it. Shepherds traditionally call the former sort of sheep “light”, and the latter “heavy” (figure 3).

In the agent-based model used by Chakrabortty and Bhamla, the outcomes differ depending on whether a herd is predominantly light or heavy (arXiv:2406.06912). When a simulated herd is subjected to the “pressure” of a shepherding dog, it might do one of three things: flee in a disorganized way, shedding panicked individuals; flock in a cohesive group; or just carry on grazing while reorienting to face at right angles to the dog, as if turning away from the threat.

Again these behaviours can be summarized in a 2D phase diagram, with axes representing the size of the herd and what the two researchers call the “specificity of the sheepdog stimulus” (figure 4). This factor depends on the ratio of the controlling stimulus (the strength of sheep–dog repulsion) and random noisiness in the sheep’s response. Chakrabortty and Bhamla say that sheepdog trials are conducted for herd sizes where all three possible outcomes are well represented, creating an exacting test of the dog’s ability to get the herd to do its bidding.

4 Fleeing, flocking and grazing: types of sheep movement

Graph showing types of sheep movement

The outcomes of the shepherding model of Chakrabortty and Bhamla can be summarized in a phase diagram showing the different behavioural options – uncoordinated fleeing, controlled flocking, or indifferent grazing – as a function of two model parameters: the size of the flock Ns and the “specificity of stimulus”, which measures how strongly the sheep respond to the dog relative to their inherent randomness of action. Sheepdog trials are typically conducted for a flock size that allows for all three phases.

Into the wild

One of the key differences between the movements of sheep and those of fish or birds is that sheep are constrained to two dimensions. As condensed-matter physicists have come to recognize, the dimensionality of a problem can make a big difference to phase behaviour. Mahadevan says that dolphins make use of dimensionality when they are trying to shepherd schools of fish to feed on. To make them easier to catch, dolphins will often push the fish into shallow water first, converting a 3D problem to a 2D problem. Herders like sheepdogs might also exploit confinement effects to their benefit, for example using fences or topographic features to help contain the flock and simplify the control problem. Researchers haven’t yet explored these issues in their models.

Dolphins using herding tactics to drive a school of fish

As the case of dolphins shows, herding is a challenge faced by many predators. Mahadevan says he has witnessed such behaviour himself in the wild while observing a pack of wild dogs trying to corral wildebeest. The problem is made more complicated if the prey themselves can deploy group strategies to confound their predator – for example, by breaking the group apart to create confusion or indecision in the attacker, a behaviour seemingly adopted by fish. Then the situation becomes game-theoretic, each side trying to second-guess and outwit the other.

Sheep seem capable of such smart and adaptive responses. Bhamla says they sometimes appear to identify the strategy that a farmer has signalled to the dog and adopt the appropriate behaviour even without much input from the dog itself. And sometimes splitting a flock can be part of the shepherding plan: this is actually a task dogs are set in some sheepdog competitions, and demands considerable skill. Because sheepdogs seem to have an instinct to keep the flock together, they can struggle to overcome that urge and have to be highly trained to split the group intentionally.

Iain Couzin of the Max Planck Institute of Animal Behavior in Konstanz, Germany, who has worked extensively on agent-based models of collective animal movement, cautions that even if physical models like these seem to reproduce some of the phenomena seen in real life, that doesn’t mean the model’s rules reflect what truly governs the animals’ behaviour. It’s tempting, he says, to get “allured by the beauty of statistical physics” at the expense of the biology. All the same, he adds that whether or not such models truly capture what is going on in the field, they might offer valuable lessons for how to control and guide collectives of agent-like entities.

In particular, the studies of shepherding might reveal strategies that one could program into artificial shepherding agents such as robots or drones. Bhamla and Chakrabortty have in fact suggested how one such swarm control algorithm might be implemented. But it could be harder than it sounds. “Dogs are extremely good at inferring and predicting the idiosyncrasies of individual sheep and of sheep–sheep interactions,” says Chakrabortty. This allows them to adapt their strategy on the fly. “Farmers laugh at the idea of drones or robots,” says Bhamla. “They don’t think the technology is ready yet. The dogs benefit from centuries of directed evolution and training.”

Perhaps the findings could be valuable for another kind of animal herding too. “Maybe this work could be applied to herding kids at a daycare,” Bhamla jokes. “One of us has small kids and recognizes the challenges of herding small toddlers from one room to another, especially at a party. Perhaps there is a lesson here.” As anyone who has ever tried to organize groups of small children might say: good luck with that.

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.

]]>
Feature Physics sheds new light on the behaviour of sheep flocks, offering new tips on shepherding https://physicsworld.com/wp-content/uploads/2024/10/2024-09-Ball-sheep-flock-aerial-FRONTIS-colourKR.jpg newsletter
New on-chip laser fills long sought-after green gap https://physicsworld.com/a/new-on-chip-laser-fills-long-sought-after-green-gap/ Thu, 26 Sep 2024 08:30:13 +0000 https://physicsworld.com/?p=116980 Devices will be important for applications in quantum sensing and computing, biology, underwater communications and display technologies

The post New on-chip laser fills long sought-after green gap appeared first on Physics World.

]]>
A series of visible-light colours generated by a microring resonator

On-chip lasers that emit green light are notoriously difficult to make. But researchers at the National Institute of Standards and Technology (NIST) and the NIST/University of Maryland Joint Quantum Institute may now have found a way to do just this, using a modified optical component known as a ring-shaped microresonator. Green lasers are important for applications including quantum sensing and computing, medicine and underwater communications.

In the new work, a research team led by Kartik Srinivasan modified a silicon nitride microresonator such that it was able to convert infrared laser light into yellow and green light. The researchers had already succeeded in using this structure to convert infrared laser light into red, orange and yellow wavelengths, as well as a wavelength of 560 nm, which lies at the edge between yellow and green light. Previously, however, they were not able to produce the full range of yellow and green colours to fill the much sought-after “green gap”.

More than 150 distinct green-gap wavelengths

To overcome this problem, the researchers made two modifications to their resonator. The first was to thicken it by 100 nm so that it could more easily generate green light with wavelengths down to 532 nm. Being able to produce such a short wavelength means that the entire green wavelength range is now covered, they say. In parallel, they modified the cladding surrounding the microresonator by etching away part of the silicon dioxide layer that it was fabricated on. This alteration made the output colours less sensitive to the dimension of the microring.

These changes meant that the team could produce more than 150 distinct green-gap wavelengths and could fine tune these too. “Previously, we could make big changes – red to orange to yellow to green – in the laser colours we could generate with OPO [optical parametric oscillation], but it was hard to make small adjustments within each of these colour bands,” says Srinivasan.

Like the previous microresonator, the new device works thanks to a process known as nonlinear wave mixing. Here, infrared light that is pumped into the ring-shaped structure is confined and guided within it. “This infrared light circulates around the ring hundreds of times due to its low loss, resulting in a build-up of intensity,” explains Srinivasan. “This high intensity enables the conversion of pump light to other wavelengths.”

Third-order optical parametric oscillation

“The purpose of the microring is to enable relatively modest, input continuous-wave laser light to build up in intensity to the point that nonlinear optical effects, which are often thought of as weak, become very significant,” says team member Xiyuan Lu.

The specific nonlinear optical process the researchers use is third-order optical parametric oscillation. “This works by taking light at a pump frequency νp and creating one beam of light that’s higher in frequency (called the signal, at a frequency νs) and one beam that’s lower in frequency (called the idler, at a frequency νi),” explains first author Yi Sun. “There is a basic energy-conservation requirement that 2νp = νs + νi.”

Simply put, this means that for every two pump photons that are used to excite the system, one signal photon and one idler photon are created, he tells Physics World.
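In wavelength terms the same requirement reads 2/λp = 1/λs + 1/λi, so fixing the pump and signal determines the idler. A quick check with illustrative numbers (chosen for the example, not the device’s actual operating point):

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    """From third-order OPO energy conservation, 2*nu_p = nu_s + nu_i,
    i.e. 2/lambda_p = 1/lambda_s + 1/lambda_i, solve for the idler."""
    return 1.0 / (2.0 / pump_nm - 1.0 / signal_nm)

# An infrared pump at 780 nm producing a green signal at 532 nm
# implies an idler in the near-infrared, around 1461 nm:
print(round(idler_wavelength_nm(780, 532)))  # -> 1461
```

As required, the signal sits above the pump in frequency and the idler below it.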

Towards higher power and a broader range of colours

The NIST/University of Maryland team has been working on optical parametric oscillation as a way to convert near-infrared laser light to visible laser light for several years now. One of their main objectives was to fill the green gap in laser technology and fabricate frequency-converted lasers for applications in quantum technology, biology and displays.

“Some of the major applications we are ultimately targeting are high-end lasers, continuous-wave single-mode lasers covering the green gap or even a wider range of frequencies,” reveals team member Jordan Stone. “Applications include lasers for quantum optics, biology and spectroscopy, and perhaps laser/hologram display technologies.”

For now, the researchers are focusing on achieving higher power and a broader range of colours (perhaps even down to blue wavelengths). They also hope to make devices that can be better controlled and tuned. “We are also interested in laser injection locking with frequency-converted lasers, or using other techniques to further enhance the coherence of our lasers,” says Stone.

The work is detailed in Light: Science & Applications.

The post New on-chip laser fills long sought-after green gap appeared first on Physics World.

]]>
Research update Devices will be important for applications in quantum sensing and computing, biology, underwater communications and display technologies https://physicsworld.com/wp-content/uploads/2024/09/27-09-24-Color-Series-NIST.jpg
Researchers exploit quantum entanglement to create hidden images https://physicsworld.com/a/researchers-exploit-quantum-entanglement-to-create-hidden-images/ Wed, 25 Sep 2024 13:00:43 +0000 https://physicsworld.com/?p=116973 Encoding an image into the quantum correlations of photon pairs makes it invisible to conventional imaging techniques

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

]]>
Encoding images in photon correlations

Ever since the double-slit experiment was performed, physicists have known that light can be observed as either a wave or a stream of particles. For everyday imaging applications, it is the wave-like aspect of light that manifests, with receptors (natural or artificial) capturing the information contained within the light waves to “see” the scene being observed.

Now, Chloé Vernière and Hugo Defienne from the Paris Institute of Nanoscience at Sorbonne University have used quantum correlations to encode an image into light such that it only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera – otherwise the image is hidden from view.

Encoding information in quantum correlations

In a study described in Physical Review Letters, Vernière and Defienne managed to hide an image of a cat from conventional light measurement devices by encoding the information in quantum entangled photons, known as a photon-pair correlation. To achieve this, they shaped spatial correlations between entangled photons – in the form of arbitrary amplitude and phase objects – to encode image information within the pair correlation. Once the information is encoded into the photon pairs, it is undetectable by conventional measurements. Instead, a single-photon detector known as an electron-multiplying charge-coupled device (EMCCD) camera is needed to “show” the hidden image.

“Quantum entanglement is a fascinating phenomenon, central to many quantum applications and a driving concept behind our research,” says Defienne. “In our previous work, we demonstrated that, in certain cases, quantum correlations between photons are more resistant to external disturbances, such as noise or optical scattering, than classical light. Inspired by this, we wondered how this resilience could be leveraged for imaging. We needed to use these correlations as a support – a ‘canvas’ – to imprint our image, which is exactly what we’ve achieved in this work.”

How to hide an image

The researchers used a technique known as spontaneous parametric down-conversion (SPDC), which is used in many quantum optics experiments, to generate the entangled photons. SPDC is a nonlinear process that uses a nonlinear crystal (NLC) to split a single high-energy photon from a pump beam into two lower energy entangled photons. The properties of the lower energy photons are governed by the geometry and type of the NLC and the characteristics of the pump beam.

In this study, the researchers used a continuous-wave laser that produced a collimated beam of horizontally polarized 405 nm light to illuminate a standing cat-shaped mask, which was then Fourier imaged onto an NLC using a lens. The spatially entangled near-infrared (810 nm) photons, produced after passing through the NLC, were then detected using another lens and the EMCCD.

This SPDC process produces an encoded image of a cat. This image does not appear on regular camera film and only becomes visible when the photons are counted one by one using the EMCCD. This allowed the image of the cat to be “hidden” in light and unobservable by traditional cameras.

“It is incredibly intriguing that an object’s image can be completely hidden when observed classically with a conventional camera, but then when you observe it ‘quantumly’ by counting the photons one by one and examining their correlations, you can actually see it,” says Vernière, a PhD student on the project. “For me, it is a completely new way of doing optical imaging, and I am hopeful that many powerful applications will emerge from it.”

What’s next?

This research extends the team’s previous work, and Defienne says that their next goal is to show that this new method of imaging has practical applications and is not just a scientific curiosity. “We know that images encoded in quantum correlations are more resistant to external disturbances – such as noise or scattering – than classical light. We aim to leverage this resilience to improve imaging depth in scattering media.”

When asked about the applications that this development could impact, Defienne tells Physics World: “We hope to reduce sensitivity to scattering and achieve deeper imaging in biological tissues or longer-range communication through the atmosphere than traditional technologies allow. Even though we are still far from it, this could potentially improve medical diagnostics or long-range optical communications in the future.”

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

]]>
Research update Encoding an image into the quantum correlations of photon pairs makes it invisible to conventional imaging techniques https://physicsworld.com/wp-content/uploads/2024/09/25-09-24-hidden-image-featured.jpg newsletter1
Ambipolar electric field helps shape Earth’s ionosphere https://physicsworld.com/a/ambipolar-electric-field-helps-shape-earths-ionosphere/ Wed, 25 Sep 2024 07:53:39 +0000 https://physicsworld.com/?p=116952 Scientists make first ever measurements of a planet-wide field that could be as fundamental as gravity and magnetic fields

The post Ambipolar electric field helps shape Earth’s ionosphere appeared first on Physics World.

]]>
A drop in electric potential of just 0.55 V, measured at altitudes between 250 and 768 km in the Earth’s atmosphere above the North and South poles, could be the first direct measurement of our planet’s long-sought electrostatic field. The measurements, from NASA’s Endurance mission, reveal that this field is important for driving how ions escape into space and for shaping the upper layer of the atmosphere, known as the ionosphere.

Researchers first predicted the existence of the ambipolar electric field in the 1960s as the first spacecraft flying over the Earth’s poles detected charged particles (including positively-charged hydrogen and oxygen ions) flowing out from the atmosphere. The theory of a planet-wide electric field was developed to directly explain this “polar wind”, but the effects of this field were thought to be too weak to be detectable. Indeed, if the ambipolar field was the only mechanism driving the electrostatic field of Earth, then the resulting electric potential drop across the exobase transition region (which lies at an altitude of between 200–780 km) could be as low as about 0.4 V.

A team of researchers led by Glyn Collinson at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, has now succeeded in measuring this field for the first time thanks to a new instrument called a photoelectron spectrometer, which they developed. The device was mounted on the Endurance rocket, which was launched from Svalbard in the Norwegian Arctic in May 2022. “Svalbard is the only rocket range in the world where you can fly through the polar wind and make the measurements we needed,” says team member Suzie Imber, who is a space physicist at the University of Leicester, UK.

Just the “right amount”

The rocket reached an altitude of 768.03 km and spent 19 min at high altitude while the onboard spectrometer measured the energies of electrons there every 10 seconds. It measured a drop in electric potential of 0.55 ± 0.09 V over an altitude range of 258–769 km. While tiny, this is just the “right amount” to explain the polar wind without any other atmospheric effects, says Collinson.

The researchers showed that the ambipolar field, which is generated exclusively by the outward pressure of ionospheric electrons, increases the “scale height” of the ionosphere by as much as 271% (from a height of 77.0 km to a height of 208.9 km). This part of the atmosphere therefore remains denser to greater heights than it would if the field did not exist. This is because the field increases the supply of cold oxygen ions (O+) to the magnetosphere (that is, near the peak at 768 km) by more than 3.8%, so counteracting the effects of other mechanisms (such as wave-particle interactions) that can heat and accelerate these particles to velocities high enough for them to escape into space. The field also probably explains why the magnetosphere is made up primarily of cold hydrogen ions (H+).
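The effect of the field on scale height can be illustrated with a back-of-envelope calculation: an upward ambipolar force eE partially cancels gravity in the barometric scale height H = kT/F. The sketch below is not the paper’s model; the ion temperature, representative altitude and uniform-field assumption are all invented for illustration, which is why the enhancement factor it produces only agrees with the reported 271% in order of magnitude.

```python
# Back-of-envelope sketch (not the Endurance analysis): how a ~0.55 V
# ambipolar potential drop spread over ~500 km changes the scale height
# of O+ ions. Temperature, altitude and the uniform-field assumption are
# illustrative, not values from the paper.
K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_O = 16 * 1.66053907e-27   # mass of an O+ ion, kg

def gravity(altitude_m, r_earth=6.371e6, g0=9.81):
    """Gravitational acceleration at a given altitude."""
    return g0 * (r_earth / (r_earth + altitude_m)) ** 2

def scale_height(temperature_k, net_downward_force):
    """Barometric scale height H = k_B T / F for net downward force F per ion."""
    return K_B * temperature_k / net_downward_force

T_ION = 1000.0              # assumed ion temperature, K
ALT = 500e3                 # representative altitude, m
POTENTIAL_DROP = 0.55       # measured drop, V
SPAN = 769e3 - 258e3        # altitude range of the measurement, m

f_gravity = M_O * gravity(ALT)                   # downward pull on one O+ ion
f_field = E_CHARGE * POTENTIAL_DROP / SPAN       # upward ambipolar force (assumed uniform)

h_without = scale_height(T_ION, f_gravity)
h_with = scale_height(T_ION, f_gravity - f_field)
print(f"Scale height without field: {h_without/1e3:.0f} km")
print(f"Scale height with field:    {h_with/1e3:.0f} km")
```

Even with these crude assumptions, the sub-volt potential drop supplies a force per ion comparable to gravity, which is why such a tiny field can inflate the ionosphere so strongly.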

The ambipolar field could be as fundamental for our planet as its gravity and magnetic fields, says Collinson, and it may even have helped shape how the atmosphere evolved. Similar fields might also exist on other planets in the solar system with an atmosphere, including Venus and Mars. “Understanding the forces that cause Earth’s atmosphere to slowly leak to space may be important for revealing what makes Earth habitable and why we’re all here,” he tells Physics World. “It’s also crucial to accurately forecast the impact of geomagnetic storms and ‘space weather’.”

Looking forward, the scientists say they would like to make further measurements of the Earth’s ambipolar field in the future. Happily, they recently received endorsement for a follow-up rocket – called Resolute – to do just this.

The post Ambipolar electric field helps shape Earth’s ionosphere appeared first on Physics World.

]]>
Research update Scientists make first ever measurements of a planet-wide field that could be as fundamental as gravity and magnetic fields https://physicsworld.com/wp-content/uploads/2024/09/endurance-launch-photo.jpg newsletter1
Light-absorbing dye turns skin of a live mouse transparent https://physicsworld.com/a/light-absorbing-dye-turns-skin-of-a-live-mouse-transparent/ Tue, 24 Sep 2024 15:00:54 +0000 https://physicsworld.com/?p=116964 The technique could be used to observe a wide range of deep-seated biological structures and activity

The post Light-absorbing dye turns skin of a live mouse transparent appeared first on Physics World.

]]>
One of the difficulties when trying to image biological tissue using optical techniques is that tissue scatters light, which makes it opaque. This scattering occurs because the different components of tissue, such as water and lipids, have different refractive indices, and it limits the depth at which light can penetrate.

A team of researchers at Stanford University in the US has now found that a common water-soluble yellow dye (among several other dye molecules) that strongly absorbs near-ultraviolet and blue light can help make biological tissue transparent in just a few minutes, thus allowing light to penetrate more deeply. In tests on the skin, muscle and connective tissue of mice, the team used the technique to observe a wide range of deep-seated structures and biological activity.

In their work, the research team – led by Zihao Ou (now at The University of Texas at Dallas), Mark Brongersma and Guosong Hong – rubbed the common food dye tartrazine, which is yellow/red in colour, onto the abdomen, scalp and hindlimbs of live mice. By absorbing light in the blue part of the spectrum, the dye altered the refractive index of the water in the treated areas at red-light wavelengths, such that it more closely matched that of lipids in this part of the spectrum. This effectively reduced the refractive-index contrast between the water and the lipids and allowed the biological tissue to appear more transparent at this wavelength, albeit tinged with red.
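The underlying optics can be sketched with a toy single-oscillator Lorentz model: an absorption resonance in the blue raises the real part of the refractive index at longer (red) wavelengths, nudging water’s index towards that of lipids. The oscillator strength, resonance wavelength and damping below are invented for illustration and are not fitted to tartrazine.

```python
# Toy Lorentz-oscillator sketch of the index-matching idea: a dye resonance
# absorbing in the blue raises the real refractive index at red wavelengths.
# All oscillator parameters are illustrative, not fitted to tartrazine.
import math

C = 2.998e8  # speed of light, m/s

def omega(wavelength_m):
    """Angular frequency of light of a given vacuum wavelength."""
    return 2 * math.pi * C / wavelength_m

def index_with_resonance(wavelength_m, n_background, resonance_nm, strength, damping):
    """Real refractive index of a background medium plus one Lorentz oscillator."""
    w = omega(wavelength_m)
    w0 = omega(resonance_nm * 1e-9)
    chi = strength / (w0**2 - w**2 - 1j * damping * w)  # complex susceptibility
    return math.sqrt((n_background**2 + chi).real)

N_WATER = 1.333
# Index of the aqueous phase at 600 nm, without and with a blue (430 nm) resonance
n_plain = index_with_resonance(600e-9, N_WATER, 430.0, 0.0, 5e14)
n_dyed = index_with_resonance(600e-9, N_WATER, 430.0, 2.5e29, 5e14)
print(f"n(600 nm) without dye: {n_plain:.4f}, with dye resonance: {n_dyed:.4f}")
```

The resonance leaves red light largely unabsorbed while lifting the red-wavelength index, which is the Kramers–Kronig behaviour the researchers exploited.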

In this way, the researchers were able to visualize internal organs, such as the liver, small intestine and bladder, through the skin without requiring any surgery. They were even able to observe fluorescent protein-labelled enteric neurons in the abdomen and monitor the movements of these nerve cells. This enabled them to generate maps showing different movement patterns in the gut during digestion. They were also able to visualize blood flow in the rodents’ brains and the fine structure of muscle sarcomere fibres in their hind limbs.

Reversible effect

The skin becomes transparent in just a few minutes and the effect can be reversed by simply rinsing off the dye.

So far, this “optical clearing” study has only been conducted on animals. But if extended to humans, it could offer a variety of benefits in biology, diagnostics and even cosmetics, says Hong. Indeed, the technique could help make some types of invasive biopsies a thing of the past.

“For example, doctors might be able to diagnose deep-seated tumours by simply examining a person’s tissue without the need for invasive surgical removal. It could potentially make blood draws less painful by helping phlebotomists easily locate veins under the skin and could also enhance procedures like laser tattoo removal by allowing more precise targeting of the pigment beneath the skin,” Hong explains. “If we could just look at what’s going on under the skin instead of cutting into it, or using radiation to get a less than clear look, we could change the way we see the human body.”

Hong tells Physics World that the collaboration originated from a casual conversation he had with Brongersma, at a café on Stanford’s campus during the summer of 2021. “Mark’s lab specializes in nanophotonics while my lab focuses on new strategies for enhancing deep-tissue imaging of neural activity and light delivery for optogenetics. At the time, one of my graduate students, Nick Rommelfanger (third author of the current paper), was working on applying the ‘Kramers-Kronig’ relations to investigate microwave–brain interactions. Meanwhile, my postdoc Zihao Ou (first author of this paper) had been systematically screening a variety of dye molecules to understand their interactions with light.”

Tartrazine emerged as the leading candidate, says Hong. “This dye showed intense absorption in the near-ultraviolet/blue spectrum (and thus strong enhancement of refractive index in the red spectrum), minimal absorption beyond 600 nm, high water solubility and excellent biocompatibility, as it is an FD&C-approved food dye.”

“We realized that the Kramers-Kronig relations could be applied to the resonance absorption of dye molecules, which led me to ask Mark about the feasibility of matching the refractive index in biological tissues, with the aim of reducing light scattering,” Hong explains. “Over the past three years, both our labs have had numerous productive discussions, with exciting results far exceeding our initial expectations.”

The researchers say they are now focusing on identifying other dye molecules with greater efficiency in achieving tissue transparency. “Additionally, we are exploring methods for cells to express intensely absorbing molecules endogenously, enabling genetically encoded tissue transparency in live animals,” reveals Hong.

The study is detailed in Science.

The post Light-absorbing dye turns skin of a live mouse transparent appeared first on Physics World.

]]>
Research update The technique could be used to observe a wide range of deep-seated biological structures and activity https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Zihao-Ou-Lab-1a.jpg newsletter1
Science thrives on constructive and respectful peer review https://physicsworld.com/a/science-thrives-on-constructive-and-respectful-peer-review/ Tue, 24 Sep 2024 12:42:06 +0000 https://physicsworld.com/?p=116969 Unhelpful or rude feedback can shake the confidence of early career researchers

The post Science thrives on constructive and respectful peer review appeared first on Physics World.

]]>
It is Peer Review Week and celebrations are well under way at IOP Publishing (IOPP), which brings you the Physics World Weekly podcast.

Reviewer feedback to authors plays a crucial role in the peer-review process, boosting the quality of published papers to the benefit of authors and the wider scientific community. But sometimes authors receive very unhelpful or outright rude feedback about their work. These inappropriate comments can shake the confidence of early career researchers, and even dissuade them from pursuing careers in science.

Our guest in this episode is Laura Feetham-Walker, who is reviewer engagement manager at IOPP. She explains how the publisher is raising awareness of the importance of constructive and respectful peer review feedback and how innovations can help to create a positive peer review culture.

As part of the campaign, IOPP asked some leading physicists to recount the worst reviewer comments that they have received – and Feetham-Walker shares some real shockers in the podcast.

IOPP has created a video called “Unprofessional peer reviews can harm science” in which leading scientists share inappropriate reviews that they have received.

The publisher also offers a Peer Review Excellence training and certification programme, which equips early-career researchers in the physical sciences with the skills to provide constructive feedback.

The post Science thrives on constructive and respectful peer review appeared first on Physics World.

]]>
Podcasts Unhelpful or rude feedback can shake the confidence of early career researchers https://physicsworld.com/wp-content/uploads/2024/09/Laura-Feetham-Walker.jpg newsletter1
Convection enhances heat transport in sea ice https://physicsworld.com/a/convection-enhances-heat-transport-in-sea-ice/ Tue, 24 Sep 2024 08:42:25 +0000 https://physicsworld.com/?p=116946 New mathematical framework could allow for more accurate climate models

The post Convection enhances heat transport in sea ice appeared first on Physics World.

]]>
The thermal conductivity of sea ice can significantly increase when convective flow is present within the ice. This new result, from researchers at Macquarie University, Australia, and the University of Utah and Dartmouth College, both in the US, could allow for more accurate climate models – especially since current global models only account for temperature and salinity and not convective flow.

Around 15% of the ocean’s surface is covered with sea ice at some point during the year. Sea ice is a thin layer that separates the atmosphere and the ocean, and it is responsible for regulating heat exchange between the two in the polar regions of our planet. The thermal conductivity of sea ice is a key parameter in climate models. It has proved difficult to measure, however, because of its complex structure, made up of ice, air bubbles and brine inclusions, which form as the ice freezes from the surface of the ocean downwards. Indeed, sea ice can be thought of as a porous composite material and is therefore very sensitive to changes in temperature and salinity.

The salty liquid within the brine inclusions is heavier than fresh ocean water. This results in convective flow within the ice, creating channels through which liquid can flow out, explains applied mathematician Noa Kraitzman at Macquarie, who led this new research effort. “Our new framework characterizes enhanced thermal transport in porous sea ice by combining advection-diffusion processes with homogenization theory, which simplifies complex physical properties into an effective bulk coefficient.”

Thermal conductivity of sea ice can increase by a factor of two to three

The new work builds on a 2001 study in which researchers observed an increase in thermal conductivity in sea ice at warmer temperatures. “In our calculations, we had to derive new bounds on the effective thermal conductivity, while also accounting for complex, two-dimensional convective fluid flow and developing a theoretical model that could be directly compared with experimental measurements in the field,” explains Kraitzman. “We employed Padé approximations to obtain the required bounds and parametrized the Péclet number specifically for sea ice, considering it as a saturated rock.”

Padé approximations are routinely used to approximate a function by a rational function of given order, and the Péclet number is a dimensionless parameter defined as the ratio of the rate of advection to the rate of diffusion.
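The Péclet number definition above translates into a one-line calculation, Pe = vL/D. The brine speed and vertical length scale below are rough illustrative values, not figures from the study; only the thermal diffusivity of ice is a standard textbook number.

```python
# Quick illustration of the Péclet number Pe = v L / D for brine convection
# in sea ice. The flow speed and length scale are assumed for illustration,
# not taken from the study.

def peclet(velocity, length, diffusivity):
    """Ratio of the rate of advection to the rate of diffusion."""
    return velocity * length / diffusivity

V_BRINE = 5e-6      # assumed brine flow speed, m/s
L_CHANNEL = 0.1     # assumed vertical length scale (~bottom 10 cm of ice), m
D_THERMAL = 1.2e-7  # thermal diffusivity of ice, m^2/s

pe = peclet(V_BRINE, L_CHANNEL, D_THERMAL)
print(f"Pe = {pe:.2f}")  # Pe > 1: advection outpaces diffusion
```

When Pe exceeds one, as in this assumed regime, convective transport dominates diffusion, which is the condition under which the enhanced effective conductivity matters.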

The results suggest that the effective thermal conductivity of sea ice can increase by a factor of two to three because of convective flow, especially in the lower, warmer sections of the ice, where temperature and the ice’s permeability favour convection, Kraitzman tells Physics World. “This enhancement is mainly confined to the bottom 10 cm during the freezing season, when convective flows are present within the sea ice. Incorporating these bounds into global climate models could improve their ability to predict thermal transport through sea ice, resulting in more accurate predictions of sea ice melt rates.”

Looking forward, Kraitzman and colleagues say they now hope to acquire additional field measurements to refine and validate their model. They also want to extend their mathematical framework to include more general 3D flows and incorporate the complex fluid exchange processes that exist between ocean and sea ice. “By addressing these different areas, we aim to improve the accuracy and applicability of our model, particularly in ocean-sea ice interaction models, aiming for a better understanding of polar heat exchange processes and their global impacts,” says Kraitzman.

The present work is detailed in Proceedings of the Royal Society A.

The post Convection enhances heat transport in sea ice appeared first on Physics World.

]]>
Research update New mathematical framework could allow for more accurate climate models https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_ProfKG_Homogenization_2024-1600x1000-1.jpg
Short-range order always appears in new type of alloy https://physicsworld.com/a/short-range-order-always-appears-in-new-type-of-alloy/ Mon, 23 Sep 2024 13:00:48 +0000 https://physicsworld.com/?p=116932 New insights into hidden atomic ordering could help in the development of more robust alloys

The post Short-range order always appears in new type of alloy appeared first on Physics World.

]]>
Short-range order plays an important role in defining the properties and performance of “multi-principal element alloys” (MPEAs), but the way in which this order develops is little understood, making it difficult to control. In a surprising new discovery, a US-based research collaboration has found that this order exists regardless of how MPEAs are processed. The finding will help scientists develop more effective ways to improve the properties of these materials and even tune them for specific applications, especially those with demanding conditions.

MPEAs are a relatively new type of alloy and consist of three or more components in nearly equal proportions. This makes them very different to conventional alloys, which are made from just one or two principal elements with trace elements added to improve their performance.

In recent years, MPEAs have spurred a flurry of interest thanks to their high strength, hardness and toughness over temperature ranges at which traditional alloys, such as steel, can fail. They could also be more resistant to corrosion, making them promising for use in extreme conditions, such as in power plants, or aerospace and automotive technologies, to name but three.

Ubiquitous short-range order

MPEAs were originally thought of as being random solid solutions with the constituent elements being haphazardly dispersed, but recent experiments have shown that this is not the case.

The researchers – from Penn State University, the University of California, Irvine, the University of Massachusetts, Amherst, and Brookhaven National Laboratory – studied the cobalt/chromium/nickel (CoCrNi) alloy, one of the best-known examples of an MPEA. This face-centred cubic (FCC) alloy boasts the highest fracture toughness for an alloy at liquid helium temperatures ever recorded.

Using an improved transmission electron microscopy characterization technique combined with advanced three-dimensional printing and atomistic modelling, the team found that short-range order, which occurs when atoms are arranged in a non-random way over short distances, appears in three CoCrNi-based FCC MPEAs under a variety of processing and thermal treatment conditions.

Their computational modelling calculations also revealed that local chemical order forms in the liquid–solid interface when the alloys are rapidly cooled, even at a rate of 100 billion °C/s. This effect comes from the rapid atomic diffusion in the supercooled liquid, at rates equal to or even greater than the rate of solidification. Short-range order is therefore an inherent characteristic of FCC MPEAs, the researchers say.

The new findings are in contrast to the previous notion that the elements in MPEAs arrange themselves randomly in the crystal lattice if they cool rapidly during solidification. It also refutes the idea that short-range order develops mainly during annealing (a process in which heating and slow cooling are used to improve material properties such as strength, hardness and ductility).

Short-range order can affect MPEA properties, such as strength or resistance to radiation damage. The researchers, who report their work in Nature Communications, say they now plan to explore how corrosion and radiation damage affect the short-range order in MPEAs.

“MPEAs hold promise for structural applications in extreme environments. However, to facilitate their eventual use in industry, we need to have a more fundamental understanding of the structural origins that give rise to their superior properties,” says team co-lead Yang Yang, who works in the engineering science and mechanics department at Penn State.

The post Short-range order always appears in new type of alloy appeared first on Physics World.

]]>
Research update New insights into hidden atomic ordering could help in the development of more robust alloys https://physicsworld.com/wp-content/uploads/2024/09/SRO-photo-CFN-image-contest.jpg
We should treat our students the same way we would want our own children to be treated https://physicsworld.com/a/we-should-treat-our-students-the-same-way-we-would-want-our-own-children-to-be-treated/ Mon, 23 Sep 2024 10:00:38 +0000 https://physicsworld.com/?p=116687 Pete Vukusic says that students' positive experiences matter profoundly

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

]]>
“Thank goodness I don’t have to teach anymore.” These were the words spoken by a senior colleague and former mentor upon hearing about the success of their grant application. They had been someone I had respected. Such comments, however, reflect an attitude that persists across many UK higher-education (HE) science departments. Our departments’ students, our own children even, studying across the UK at HE institutes deserve far better.

It is no secret in university science departments that lecturing, tutoring and lab supervision are perceived by some colleagues to be mere distractions from what they consider their “real” work and purpose to be. These colleagues may evasively try to limit their exposure to teaching, and their commitment to its high-quality delivery. This may involve focusing time and attention solely on research activities or being named on as many research grant applications as possible.

University workload models set time aside for funded research projects, as they should. Research grants provide universities with funding that contributes to their finances and are an undeniably important revenue stream. However, an aversion to – or flagrant avoidance of – teaching by some colleagues is encountered by many who have oversight and responsibility for the organization and provision of education within university science departments.

It is also a behaviour and mindset that is recognized by students, and which negatively impacts their university experience. Avoidance of teaching displayed, and sometimes privately endorsed, by senior or influential colleagues in a department can also shape its culture and compromise the quality of education that is delivered. Such attitudes have been known to diffuse into a department’s environment, negatively impacting students’ experiences and further learning. Students certainly notice and are affected by this.

The quality of physics students’ experiences depends on many factors. One is the likelihood of graduating with skills that make them employable and have successful careers. Others include: the structure, organization and content of their programme; the quality of their modules and the enthusiasm and energy with which they are delivered; the quality of the resources to which they have access; and the extent to which their individual learning needs are supported.

We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching.

In the UK, the quality of departments’ and institutions’ delivery of these and other components has been assessed since 2005 by the National Student Survey (NSS). Although imperfect and continuing to evolve, it is commissioned every year by the Office for Students on behalf of UK funding and regulatory bodies and is delivered independently by Ipsos.

The NSS can be a helpful tool to gather final-year students’ opinions and experiences about their institutions and degree programmes. Publication of the NSS datasets in July each year should, in principle, provide departments and institutions with the information they need to recognize their weaknesses and improve their subsequent students’ experiences. They would normally be motivated to do this because of the direct impact NSS outcomes have on institutions’ league table positions. These league tables can tangibly impact student recruitment and, therefore, an institution’s finances.

My sincerely held contention, however, communicated some years ago to a red-faced, finger-wagging senior manager during a fraught meeting, is this. We should ignore NSS outcomes. They don’t, and shouldn’t, matter. This is a bold statement; career-ending, even. I articulated that we and all our colleagues should instead wholeheartedly strive to treat our students as we would want our own children, or our younger selves, to be treated, across every academic aspect and learning-related component of their journey while they are with us. This would be the right and virtuous thing to do. In fact, if we do this, the positive NSS outcomes would take care of themselves.

Academic guardians

I have been on the frontline of university teaching, research, external examining and education leadership for close to 30 years. My heartfelt counsel, formed during this journey, is that our students’ positive experiences matter profoundly. They matter because, in joining our departments and committing three or more years and many tens of thousands of pounds to us, our students have placed their fragile and uncertain futures and aspirations into our hands.

We should feel privileged to hold this position and should respond to and collaborate with them positively, always supportively and with compassion, kindness and empathy. We should never be the traditionally tough and inflexible guardians of a discipline that is academically demanding, and which can, in a professional physics academic career, be competitively unyielding. That is not our job. Our roles, instead, should be as our students’ academic guardians, enthusiastically taking them with us across this astonishing scientific and mathematical world; teaching, supporting and enabling wherever we possibly can.

A narrative such as this sounds fantastical. It seems far removed from the rigours and tensions of day-in, day-out delivery of lecture modules, teaching labs and multiple research targets. But the metaphor it represents has been the beating heart of the most successfully effective, positive and inclusive learning environments I have encountered in UK and international HE departments during my long academic and professional journey.

I urge physics and science colleagues working in my own and other UK HE departments to remember and consider what it can be like to be an anxious or confused student, whose cognitive processes are still developing, whose self-confidence may be low and who may, separately, be facing other challenges to their circumstances. We should then behave appropriately. We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching. Ego has no place. We should show kindness, patience, and a willingness to engage them in a community of learning, framed by supportive and inclusive encouragement. We should treat our students the way we would want our own children to be treated.

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

]]>
Opinion and reviews Pete Vukusic says that students' positive experiences matter profoundly https://physicsworld.com/wp-content/uploads/2024/09/2024-09-Forum-Vukusic-teacher-and-students-in-3D-printing-lab-875671948-iStock_monkeybusinessimages.jpg newsletter
Working in quantum tech: where are the opportunities for success? https://physicsworld.com/a/working-in-quantum-tech-where-are-the-opportunities-for-success/ Mon, 23 Sep 2024 09:53:55 +0000 https://physicsworld.com/?p=116928 Quantum professionals describe the emerging industry, and the skills required to thrive

The post Working in quantum tech: where are the opportunities for success? appeared first on Physics World.

]]>

The quantum industry is booming. An estimated $42bn was invested in the sector in 2023, a figure projected to rise to $106bn by 2040. In this episode of Physics World Stories, two experts from the quantum industry share their experiences and give advice on how to enter this blossoming sector. Quantum technologies – including computing, communications and sensing – could vastly outperform today’s technology for certain applications, such as efficient and scalable artificial intelligence.

Our first guest is Matthew Hutchings, chief product officer and co-founder of SEEQC. Based in New York and with facilities in Europe, SEEQC is developing a digital quantum computing platform with a broad industrial market due to its combination of classical and quantum technologies. Hutchings speaks about the increasing need for engineering positions in a sector that to date has been dominated by workers with a PhD in quantum information science.

The second guest is Araceli Venegas-Gomez, founder and CEO of QURECA, which helps to train and recruit individuals, while also providing business development services. Venegas-Gomez’s journey into the sector began with her reading about quantum mechanics as a hobby while working in aerospace engineering. In launching QURECA, she realized there was an important gap to be filled between quantum information science and business – two communities that have tended to speak entirely different languages.

Get even more tips and advice in the recent feature article ‘Taking the leap – how to prepare for your future in the quantum workforce’.

The post Working in quantum tech: where are the opportunities for success? appeared first on Physics World.

]]>
Quantum professionals describe the emerging industry, and the skills required to thrive Quantum professionals describe the emerging industry, and the skills required to thrive Physics World Working in quantum tech: where are the opportunities for success? full false 45:53 Podcasts Quantum professionals describe the emerging industry, and the skills required to thrive https://physicsworld.com/wp-content/uploads/2024/09/Quantum-globe-1169711469-iStock_metamorworks-scaled.jpg newsletter
Thermal dissipation decoheres qubits https://physicsworld.com/a/thermal-dissipation-decoheres-qubits/ Mon, 23 Sep 2024 08:04:21 +0000 https://physicsworld.com/?p=116942 Superconducting quantum bits release their energy into their environment as photons

The post Thermal dissipation decoheres qubits appeared first on Physics World.

]]>
How does a Josephson junction, which is the basic component of a superconducting quantum bit (or qubit), release its energy into the environment? It is radiated as photons, according to new experiments by researchers at Aalto University in Finland, working with colleagues from Spain and the US, who used a thermal radiation detector known as a bolometer to measure this radiation directly in the electrical circuits holding the qubits. The work will allow for a better understanding of the loss and decoherence mechanisms in qubits that can disrupt and destroy quantum information, they say.

Quantum computers make use of qubits to store and process information. The most advanced quantum computers to date – including those being developed by IT giants Google and IBM – use qubits made from superconducting electronic circuits operating at very low temperatures. To further improve qubits, researchers need to better understand how they dissipate heat, says Bayan Karimi, who is the first author of a paper describing the new study. This heat transfer is a form of decoherence – a phenomenon by which the quantum states in qubits revert to behaving like classical 0s and 1s and lose the precious quantum information they contain.

“An understanding of dissipation in a single Josephson junction coupled to an environment remains strikingly incomplete, however,” she explains. “Today, a junction can be modelled and characterized without a detailed knowledge of, for instance, where energy is dissipated in a circuit. But improving design and performance will require a more complete picture.”

Physical environment is important

In the new work, Karimi and colleagues used a nano-bolometer to measure the very weak radiation emitted from a Josephson junction over a broad range of frequencies up to 100 GHz. The researchers identified several operation regimes depending on the junction bias, each with a dominant dissipation mechanism. “The whole frequency-dependent power and shape of the current-voltage characteristics can be attributed to the physical environment of the junction,” says Jukka Pekola, who led this new research effort.

The thermal detector works by converting radiation into heat and is composed of an absorber (made of copper), the temperature of which changes when it detects the radiation. The researchers measure this variation using a sensitive thermometer, comprising a tunnel junction between the copper absorber and a superconductor.
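The detection principle described above reduces to a simple steady-state balance: absorbed radiative power P warms the absorber by ΔT = P/G_th, where G_th is the thermal conductance linking the absorber to its cold bath. The power and conductance values below are illustrative assumptions, not the device parameters from the paper.

```python
# Sketch of the bolometer's operating principle: absorbed power P raises the
# copper absorber's temperature by dT = P / G_th, which the tunnel-junction
# thermometer then reads out. Both numbers are assumed for illustration,
# not taken from the Aalto experiment.

def temperature_rise(absorbed_power_w, thermal_conductance_w_per_k):
    """Steady-state temperature rise of the absorber above the bath."""
    return absorbed_power_w / thermal_conductance_w_per_k

P_ABS = 1e-15  # assumed absorbed power: 1 fW
G_TH = 1e-12   # assumed thermal conductance to the bath: 1 pW/K

dT = temperature_rise(P_ABS, G_TH)
print(f"Temperature rise: {dT*1e3:.3f} mK")
```

Millikelvin-scale temperature changes from femtowatt-scale absorbed powers illustrate why such a sensitive thermometer is needed, and why counting single photons is a plausible next step.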

“Our work will help us better understand the nature of heat dissipation of qubits that can disrupt and destroy quantum information and how these coherence losses can be directly measured as thermal losses in the electrical circuit holding the qubits,” Karimi tells Physics World.

In the current study, which is detailed in Nature Nanotechnology, the researchers say they measured continuous energy release from a Josephson junction when it was biased by a voltage. They now aim to find out how their detector can sense single heat loss events when the Josephson junction or qubit releases energy. “At best, we will be able to count single photons,” says Pekola.

The post Thermal dissipation decoheres qubits appeared first on Physics World.

]]>
Research update Superconducting quantum bits release their energy into their environment as photons https://physicsworld.com/wp-content/uploads/2024/09/Low-Res_Picture2.jpg
The physics of cycling’s ‘Everesting’ challenge revealed https://physicsworld.com/a/the-physics-of-cyclings-everesting-challenge-revealed/ Fri, 20 Sep 2024 15:00:04 +0000 https://physicsworld.com/?p=116931 Everesting involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.

]]>
“Everesting” involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m.

The challenge became popular during the COVID-19 lockdowns and in 2021 the Irish cyclist Ronan McLaughlin was reported to have set a new “Everesting” record of 6:40:54. This was almost 20 minutes faster than the previous world record of 6:59:38 set by the US’s Sean Gardner in 2020.

Yet a debate soon ensued on social media concerning the significant tailwind of 5.5 metres per second that day, which critics claimed would have helped McLaughlin climb the hill multiple times.

But did it? To investigate, Martin Bier, a physicist at East Carolina University in North Carolina, has now analysed what effect air resistance might have when cycling up and down a hill.

“Cycling uses ‘rolling’, which is much smoother and faster, and more efficient [than running],” notes Bier. “All of the work is purely against gravity and friction.”

Bier calculated that a tailwind does help slightly when going uphill, but most of the work when doing so is generating enough power to overcome gravity rather than air resistance.

When coming downhill, however, any headwind becomes significant given that the force of air resistance increases with the square of the cyclist’s speed. The headwind can then have a huge effect, causing a significant reduction in speed.

So, while a tailwind going up is negligible, the headwind coming down certainly won’t be. “There are no easy tricks,” Bier adds. “If you want to be a better Everester, you need to lose weight and generate more [power]. This is what matters — there’s no way around it.”
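The argument can be made concrete with a back-of-the-envelope power balance. This is an illustrative sketch, not Bier’s actual model: the rider’s power goes into work against gravity plus aerodynamic drag, and drag scales with the square of the airspeed (ground speed minus tailwind). The mass, drag area and air density below are assumed values.

```python
# Illustrative power balance for a climbing cyclist (all numbers assumed,
# not taken from Bier's analysis): total power = work against gravity
# + work against aerodynamic drag, where drag grows with airspeed squared.

RHO = 1.2   # air density, kg/m^3 (assumed)
CDA = 0.3   # drag area Cd*A, m^2 (assumed)
G = 9.81    # gravitational acceleration, m/s^2

def power_required(mass_kg, grade, ground_speed, wind_speed=0.0):
    """Power (W) to ride at ground_speed (m/s) on a slope of given grade.
    wind_speed > 0 is a tailwind; drag acts on airspeed = ground - wind."""
    airspeed = ground_speed - wind_speed
    p_gravity = mass_kg * G * grade * ground_speed             # climbing work
    # signed drag term: a strong tailwind can even push the rider forward
    p_drag = 0.5 * RHO * CDA * airspeed * abs(airspeed) * ground_speed
    return p_gravity + p_drag

# Climbing a 10% grade at 5 m/s: gravity dominates, so a 5.5 m/s tailwind
# shaves off only a small fraction of the required power.
print(power_required(70, 0.10, 5.0))       # ≈ 365.9 W without wind
print(power_required(70, 0.10, 5.0, 5.5))  # ≈ 343.1 W with the tailwind
```

With these assumed numbers the tailwind saves only around 6% of the climbing power, while on the descent, where speeds are far higher, the quadratic drag term dominates and the same wind (now a headwind) costs much more.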

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.

]]>
Blog Everesting involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m https://physicsworld.com/wp-content/uploads/2024/09/cyclists-silhouette-286024589-Shutterstock_LittlePerfectStock.jpg newsletter
Air-powered computers make a comeback https://physicsworld.com/a/air-powered-computers-make-a-comeback/ Fri, 20 Sep 2024 11:00:44 +0000 https://physicsworld.com/?p=116911 Novel device contains a pneumatic logic circuit made from 21 microfluidic valves

The post Air-powered computers make a comeback appeared first on Physics World.

]]>
A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California at Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers were able to make air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.
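Abstracting the pressure signals as booleans, the valve arrangements described above map directly onto logic gates. This is a toy illustration of the series/parallel idea, not a fluid-dynamics model of the device:

```python
# Toy model of pneumatic logic (booleans stand in for air flow; this is
# not a simulation of the actual microfluidic valves): valves in series
# pass air only when both are open (AND); valves in parallel pass air
# when either is open (OR).

def series(valve_a: bool, valve_b: bool) -> bool:
    """Air flows through two valves in a row only if both are open -> AND."""
    return valve_a and valve_b

def parallel(valve_a: bool, valve_b: bool) -> bool:
    """Air flows through parallel valves if either one is open -> OR."""
    return valve_a or valve_b

print(series(True, False))    # False: one closed valve blocks the line
print(parallel(True, False))  # True: air finds a path through the open valve
```

Since AND and OR (plus an inverting valve) suffice to build any combinational circuit, stacking more valves yields arbitrarily complex pneumatic logic.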

Complex logic circuits

Combining an increasing number of microfluidic valves enables the creation of complex air-powered logic circuits. In the new study, detailed in Device, Grover and colleagues made a device that uses 21 microfluidic valves to perform a parity bit calculation – an important calculation employed by many electronic computers to detect errors and other problems.

The novel air-powered computer detects differences in air pressure flowing through the valves to count the number of bits. If there is an error, it outputs an error signal by blowing a whistle. As a proof-of-concept, the researchers used their device to detect anomalies in an intermittent pneumatic compression (IPC) device – a leg sleeve that fills with air and regularly squeezes a patient’s legs to increase blood flow, with the aim of preventing blood clots that could lead to strokes. Normally, these machines are monitored using electronic equipment.
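The parity check itself is simple to state in code. The sketch below shows the standard even-parity scheme in pure Python as an illustration of what the 21-valve circuit computes pneumatically; it is not a model of the circuit itself:

```python
# Even-parity error detection (pure-Python illustration of the calculation
# the pneumatic circuit performs; the real device signals an error by
# blowing a whistle rather than returning False).

def add_parity_bit(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_parity):
    """True if the word is consistent, i.e. it contains an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity_bit([1, 0, 1, 1])
print(check_parity(word))   # True: transmission looks clean
word[1] ^= 1                # flip one bit to simulate an error
print(check_parity(word))   # False: the whistle would blow
```

A single-bit parity check catches any odd number of flipped bits, which is why it remains a cheap first line of defence in electronic computers too.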

“IPC devices can save lives, but they aren’t as widely employed as they could be,” says Grover. “In part, this is because they’re so expensive. We wanted to see if we could reduce their cost by replacing some of their electronic hardware with pneumatic logic.”

Air’s viscosity is important

Air-powered computers behave very similarly, but not quite identically to electronic computers, Grover adds. “For example, we can often take an existing electronic circuit and make an air-powered version of it and it’ll work just fine, but at other times the air-powered device will behave completely differently and we have to tweak the design to make it function.”

The variations between the two types of computers come down to one important physical difference between electricity and air, he explains: electricity does not have viscosity, but air does. “There are also lots of little design details that are of little consequence in electronic circuits but which become important in pneumatic circuits because of air’s viscosity. This makes our job a bit harder, but it also means we can do things with pneumatic logic that aren’t possible – or are much harder to do – with electronic logic.”


In this work, the researchers focused on biomedical applications for their air-powered computer, but they say that this is just the “tip of the iceberg” for this technology. Air-powered systems are ubiquitous, from the brakes on a train, to assembly-line robots and medical ventilators, to name but three. “By using air-powered computers to operate and monitor these systems, we could make these important systems more affordable, more reliable and safer,” says Grover.

“I have been developing air-powered logic for around 20 years now, and we’re always looking for new applications,” he tells Physics World. “What is more, there are areas in which they have advantages over conventional electronic computers.”

One specific application of interest is moving grain inside silos, he says. These enormous structures hold grain and other agricultural products, and people often have to climb inside to spread out the grain – an extremely dangerous task because they can become trapped and suffocate.

“Robots could take the place of humans here, but conventional electronic robots could generate sparks that could ignite the flammable dust inside the silo,” Grover explains. “An air-powered robot, on the other hand, would work inside the silo without this risk. We are thus working on an air-powered ‘brain’ for such a robot to keep people out of harm’s way.”

Air-powered computers aren’t a new idea, he adds. Decades ago, there was a multitude of devices being designed that ran on water or air to perform calculations. Air-powered computers fell out of favour, however, when transistors and integrated circuits made electronic computers feasible. “We’ve therefore largely forgotten the history of computers that ran on things other than electricity. Hopefully, our new work will encourage more researchers to explore new applications for these devices.”

The post Air-powered computers make a comeback appeared first on Physics World.

]]>
Research update Novel device contains a pneumatic logic circuit made from 21 microfluidic valves https://physicsworld.com/wp-content/uploads/2024/09/20-09-24-air-powered-circuit.jpg newsletter1
Quantum hackathon makes new connections https://physicsworld.com/a/quantum-hackathon-makes-new-connections/ Fri, 20 Sep 2024 08:40:32 +0000 https://physicsworld.com/?p=116848 The 2024 UK Quantum Hackathon set new standards for engagement and collaboration

The post Quantum hackathon makes new connections appeared first on Physics World.

]]>
It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”
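A classical baseline for this kind of allocation problem is small enough to sketch. The toy example below (all coordinates and names are invented for illustration; this is a brute-force classical baseline, not the quantum neural network the team used) picks vehicle bases to minimize the worst-case response distance to any community:

```python
import itertools

# Toy facility-location baseline (invented data; not the team's quantum
# approach): choose k bases from candidate sites so that the worst
# response distance to any community is as small as possible.

communities = {"A": (0, 0), "B": (5, 1), "C": (9, 9), "D": (2, 8)}  # assumed coords
candidates = {"s1": (1, 1), "s2": (8, 8), "s3": (4, 5)}             # assumed sites

def dist(p, q):
    """Euclidean distance between two points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def worst_response(bases):
    """Largest distance from any community to its nearest chosen base."""
    return max(
        min(dist(c, candidates[b]) for b in bases)
        for c in communities.values()
    )

# Exhaustively try every pair of bases and keep the best one.
best = min(itertools.combinations(candidates, 2), key=worst_response)
print(best, round(worst_response(best), 2))  # ('s1', 's2') 6.0
```

Brute force works for a handful of sites, but the number of combinations explodes with problem size, which is exactly the regime where heuristic, statistical or quantum approaches become attractive.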

Teamwork

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.

The hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors

“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – which in her case was to find the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explained. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between data such as insurance losses, stock market data and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”

The team presenting their results

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the Quantum STATES principles for responsible and ethical quantum computing (REQC), which were developed and piloted at the NQCC, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk

The post Quantum hackathon makes new connections appeared first on Physics World.

]]>
Analysis The 2024 UK Quantum Hackathon set new standards for engagement and collaboration https://physicsworld.com/wp-content/uploads/2024/09/frontis-web.png newsletter
Rheo-electric measurements to predict battery performance from slurry processing https://physicsworld.com/a/rheo-electric-measurements-to-predict-battery-performance-from-slurry-processing/ Fri, 20 Sep 2024 06:58:33 +0000 https://physicsworld.com/?p=116835 The Electrochemical Society in partnership with TA Instruments – Waters, identifies metrics linking formulation and processing of slurries to their performance as LIB electrodes

The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>

The market for lithium-ion batteries (LIBs) is expected to grow roughly 30-fold to almost 9 TWh produced annually in 2040, driven by demand from electric vehicles and grid-scale storage. Production of these batteries requires high-yield coating processes using slurries of active material, conductive carbon and polymer binder applied to metal-foil current collectors. To better understand the connections between slurry formulation, coating conditions and composite electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon black structure in the slurry that go undetected by rheological measurements alone. Rheo-electric results are connected to characterization of coated electrodes in LIBs in order to develop methods to predict the performance of a battery system based on the formulation and coating conditions of the composite electrode slurries.

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>
Webinar The Electrochemical Society in partnership with TA Instruments – Waters, identifies metrics linking formulation and processing of slurries to their performance as LIB electrodes https://physicsworld.com/wp-content/uploads/2024/09/2024-11-06-webinarimage.jpg
Simultaneous structural and chemical characterization with colocalized AFM-Raman https://physicsworld.com/a/simultaneous-structural-and-chemical-characterization-with-colocalized-afm-raman/ Thu, 19 Sep 2024 15:27:06 +0000 https://physicsworld.com/?p=116806 HORIBA explores how colocalized AFM-Raman enables dual structural and chemical analysis in a single scan, offering deeper insights across diverse applications

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>

The combination of atomic force microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of materials. Raman spectroscopy facilitates the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials. AFM, meanwhile, provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness and stiffness at the nanoscale.

Traditionally, users must rely on multiple instruments to gather such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel is the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, he focused his PhD research on biochemical changes in the skin dermis. He joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant before moving into a full-time role as an application scientist and sales support across Latin America, where he oversaw applicative sales support and co-managed business activities in the region. In 2022 he joined HORIBA France as a correlative microscopy and Raman application specialist, responsible for developing the correlative business globally by combining HORIBA’s existing technologies with complementary ones. In 2023 he was promoted to his current role as AFM & AFM-Raman global product manager, in which he oversees strategic initiatives aimed at the company’s business sustainability and continued growth.

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>
Webinar HORIBA explores how colocalized AFM-Raman enables dual structural and chemical analysis in a single scan, offering deeper insights across diverse applications https://physicsworld.com/wp-content/uploads/2024/09/2024-10-22-webinar-image.jpg
Diagnosing and treating disease: how physicists keep you safe during healthcare procedures https://physicsworld.com/a/diagnosing-and-treating-disease-how-physicists-keep-you-safe-during-healthcare-procedures/ Thu, 19 Sep 2024 14:42:15 +0000 https://physicsworld.com/?p=116888 Two medical physicists talk about the future of treatment and diagnostic technologies

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features two medical physicists working at the heart of the UK’s National Health Service (NHS). They are Mark Knight, who is chief healthcare scientist at the NHS Kent and Medway Integrated Care Board, and Fiammetta Fedele, who is head of non-ionizing radiation at Guy’s and St Thomas’ NHS Foundation Trust in London.

They explain how medical physicists keep people safe during healthcare procedures – while innovating new technologies and treatments. They also discuss the role that artificial intelligence could play in medical physics and take a look forward to the future of healthcare.

This episode is supported by RaySearch Laboratories.

RaySearch Laboratories unifies industry solutions, empowering healthcare providers to deliver precise and effective radiotherapy treatment. RaySearch products transform scattered technologies into clarity, elevating the radiotherapy industry.

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
Podcasts Two medical physicists talk about the future of treatment and diagnostic technologies https://physicsworld.com/wp-content/uploads/2024/09/Mark-knight-Fiammetta-Fedele.jpg
RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia https://physicsworld.com/a/radcalc-qa-ensuring-safe-and-efficient-radiotherapy-throughout-australia/ Thu, 19 Sep 2024 12:45:15 +0000 https://physicsworld.com/?p=116746 Cancer care provider GenesisCare is using LAP’s RadCalc platform to perform software-based quality assurance of all its radiotherapy treatment plans

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

]]>
GenesisCare is the largest private radiation oncology provider in Australia, operating across five states and treating around 30,000 cancer patients each year. At the heart of this organization, ensuring the safety and efficiency of all patient radiotherapy treatments, lies a single server running LAP’s RadCalc quality assurance (QA) software.

RadCalc is a 100% software-based platform designed to streamline daily patient QA. The latest release, version 7.3.2, incorporates advanced 3D algorithms for secondary verification of radiotherapy plans, EPID-based pre-treatment QA and in vivo dosimetry, as well as automated 3D calculation based on treatment log files.

For GenesisCare, RadCalc provides independent secondary verification for 100 to 130 new plans each day, from more than 43 radiation oncology facilities across the country. The use of a single QA platform for all satellite centres helps to ensure that every patient receives the same high standard of care. “With everyone using the same software, we’ve got a single work instruction and we’re all doing things the same way,” says Leon Dunn, chief medical physicist at GenesisCare in Victoria.

“While the individual states operate as individual business units, the physics team operates as one, and the planners operate as one team as well,” adds Peter Mc Loone, GenesisCare’s head of physics for Australia. “We are like one team nationally, so we try to do things the same way. Obviously, it makes sense to make sure everyone’s checking the plans in the same way as well.”

User approved

GenesisCare implemented RadCalc more than 10 years ago, selected in part due to the platform’s impressive reputation amongst its users in Australia. “At that time, RadCalc was well established in radiotherapy and widely used,” explains Dunn. “It didn’t have all the features that it has now, but its basic features met the requirements we needed and it had a pretty solid user base.”

Today, GenesisCare’s physicists employ RadCalc for plan verification of all types of treatment across a wide range of radiotherapy platforms – including Varian and Elekta linacs, Gamma Knife and the Unity MR-linac, as well as superficial treatments and high dose-rate brachytherapy. They also use RadCalc’s plan comparison tool to check that the output from the treatment planning system matches what was imported to the MOSAIQ electronic medical record system.

“Before we had the plan comparison feature, our radiation therapists had to manually check control points in the plan against what was on the machine,” says Mc Loone. “RadCalc checks a wide range of values within the plan. It’s a very quick check that has saved us a lot of time, but also increased the safety aspect. We have certainly picked up errors through its use.”

Keeping treatments safe

The new feature that’s helping to make a big difference, however, is GenesisCare’s recent implementation of RadCalc’s 3D independent recalculation tool. Dunn explains that RadCalc previously performed a 2D comparison between the dose to a single point in the treatment planning system and the calculated dose to that point.

The new module, on the other hand, employs RadCalc’s collapsed-cone convolution algorithm to reconstruct 3D dose on the patient’s entire CT data set. Enabled by the introduction of graphics processing units, the algorithm performs a completely independent 3D recalculation of the treatment plan on the patient’s data.  “We’ve gone from a single point to tens of thousands of points,” notes Dunn.

Importantly, this 3D recalculation can discover any errors within a treatment plan before it gets to the point at which it needs to be measured. “Our priority is for every patient to have that second check done, thereby catching anything that is wrong with the treatment plan, hopefully before it is seen by the doctor. So we can fix things before they could become an issue,” Dunn says, pointing out that in the first couple of months of using this tool, it highlighted potentially suboptimal treatment plans to be improved.
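The idea of moving from a single-point check to a full-grid comparison can be sketched simply. The example below is an illustrative secondary check, not RadCalc’s actual algorithm: it compares the planned dose at many points against an independent recalculation and reports the fraction that disagree beyond an assumed 3% tolerance.

```python
import random

# Illustrative 3D-style secondary dose check (not RadCalc's algorithm;
# tolerance and data are assumptions): instead of verifying one reference
# point, compare the planned and independently recalculated dose at many
# points and flag those that differ by more than a tolerance.

def fraction_failing(planned, recalculated, tolerance=0.03):
    """Fraction of points where the two dose calculations disagree by > tolerance."""
    fails = sum(
        1 for p, r in zip(planned, recalculated)
        if abs(r - p) / p > tolerance
    )
    return fails / len(planned)

random.seed(0)
planned = [1.0 + random.random() for _ in range(10000)]       # arbitrary dose values
recalc = [p * (1 + random.gauss(0, 0.01)) for p in planned]   # 1% calculation noise
print(fraction_failing(planned, recalc))  # only a tiny fraction exceeds 3%
```

A plan passing such a check at thousands of points gives far stronger assurance than agreement at a single point, which is the essence of the shift the GenesisCare team describes.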

Peter Mc Loone

In contrast, previous measurement-based checks had to be performed at the end of the entire planning process, after everyone had approved the plan and it had been exported to the treatment system. “Finding an error at that point puts a lot of pressure on the team to redo the plan and have everything reapproved,” Mc Loone explains. “By removing that stress and allowing checks to happen earlier in the piece, it makes the overall process safer and more efficient.”

Dunn notes that if the second check shows a problem with the plan, the plan can still be sent for measurements if needed, to confirm the RadCalc findings.

Increasing efficiency

As well as improving safety, the ability to detect errors early on in the planning process speeds up the entire treatment pathway. Operational efficiency is additionally helped by RadCalc’s high level of automation.

Once a treatment plan is created, the planning staff need to export it to RadCalc, with a single click. RadCalc then takes care of everything else, importing the entire data set, sending it to the server for recalculation and then presenting the results. “We don’t have to touch any of the processes until we get the quality checklist out, and that’s a real game changer for us,” says Dunn.

“We have one RadCalc system that can handle five different states and several different treatment planning systems [Varian’s Eclipse and Elekta’s Monaco and GammaPlan],” notes Mc Loone. “We can have 130 different plans coming in, and RadCalc will filter them correctly and apply the right beam models using the automation that LAP has built in.”

Because RadCalc performs 100% software-based checks, it doesn’t require access to the treatment machine to run the QA (which usually means waiting until the day’s clinical session has finished). “We’re no longer waiting around to perform measurements on the treatment machine,” Dunn explains. “It’s all happening while the patients are being treated during the normal course of the day. That automation process is an important time saver for us.”

This shift from measurement- to software-based QA also has a huge impact on the radiation therapists. As they were already using the machines to treat patients, the therapists were tasked with delivering most of the QA cases – at the end of the day or in between treatment sessions – and informing the physicists of any failures.

“Since we’ve introduced RadCalc, they essentially get all that time back and can focus on doing what they do best, treating patients and making sure it’s all done safely,” says Dunn. “Taking that burden away from them is a great additional bonus.”

Looking to the future, GenesisCare next plans to implement RadCalc’s log file analysis feature, which will enable the team to monitor and verify the performance of the radiotherapy machines. Essentially, the log files generated after each treatment are brought back into RadCalc, which then verifies that what the machine delivered matched the original treatment plan.
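In outline, this kind of log-file verification reduces to comparing what the machine delivered against what the plan prescribed, within a clinical tolerance. The sketch below is illustrative only – the monitor-unit values, the 1% tolerance and the function name are hypothetical, not RadCalc’s actual algorithm.

```python
# Illustrative sketch (not RadCalc's actual algorithm): compare delivered
# machine-log values against planned values and flag deviations beyond a
# tolerance. All numbers here are hypothetical.
planned_mu = [120.0, 85.5, 60.2]    # planned monitor units per beam
delivered_mu = [119.8, 85.6, 58.9]  # values recovered from the machine log

TOLERANCE = 0.01  # flag relative deviations greater than 1%

def check_delivery(planned, delivered, tol=TOLERANCE):
    """Return indices of beams whose delivered MU deviates by more than tol."""
    return [i for i, (p, d) in enumerate(zip(planned, delivered))
            if abs(d - p) / p > tol]

failures = check_delivery(planned_mu, delivered_mu)
# the third beam deviates by about 2.2%, so it is flagged
```

A real system would of course compare far more than monitor units (leaf positions, gantry angles, dose rates), but the pass/fail logic follows the same pattern.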

“Because we have so many plans going through, delivered by many different accelerators, we can start to build a picture of machine performance,” says Dunn. “In the future, I personally want to look at the data that we collect through RadCalc. Because everything’s coming through that one system, we’ve got a real opportunity to examine safety and quality at a system level, from treatment planning system through to patient treatment.”

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

The free-to-read Physics World Big Science Briefing 2024 is out now https://physicsworld.com/a/the-free-to-read-physics-world-big-science-briefing-2024-is-out-now/ Thu, 19 Sep 2024 12:00:09 +0000 https://physicsworld.com/?p=116843 Find out more about designs for a muon collider and why gender diversity in big science needs recognition

The post The free-to-read <em>Physics World Big Science Briefing</em> 2024 is out now appeared first on Physics World.

Over the past decades, “big science” has become bigger than ever, be it planning larger particle colliders, fusion tokamaks or space observatories. That development is reflected in the growth of the Big Science Business Forum (BSBF), which has been going from strength to strength following its first meeting in 2018 in Copenhagen.

This year, more than 1000 delegates from 500 organizations and 30 countries will descend on Trieste from 1 to 4 October for BSBF 2024. The meeting will see European businesses and organizations such as the European Southern Observatory, the CERN particle-physics laboratory and Fusion for Energy come together to discuss the latest developments and business trends in big science.

A key component of the event – as it was at the previous BSBF in Granada, Spain, in 2022 – is the Women in Big Science group, who will be giving a plenary session about initiatives to boost and help women in big science.

In this year’s Physics World Big Science Briefing, Elizabeth Pollitzer, co-founder and director of Portia, which seeks to improve gender equality in science, technology, engineering and mathematics, explains why we need gender equality in big science and what measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures.

One prime example of big science is particle physics. Some 70 years since the founding of CERN and more than a decade after the discovery of the Higgs boson at the lab’s Large Hadron Collider (LHC) in 2012, particle physics stands at a crossroads. While the consensus is that a “Higgs factory” should follow the LHC, there is disagreement over what kind of machine it should be – a large circular collider some 91 km in circumference or a linear machine just a few kilometres long.

As the wrangling goes on, other proposals are also being mooted such as a muon collider. Despite needing new technologies, a muon collider has the advantage that it would only require a circular collider in a tunnel roughly the size of the LHC.

Another huge multinational project is the ITER fusion tokamak currently under construction in Cadarache, France. Hit by cost hikes and delays for decades, the project received more bad news earlier this year when ITER announced that the tokamak will now not fire up until 2035. “Full power” mode with deuterium and tritium won’t happen until 2039, some 50 years after the facility was first mooted.

Backers hope that ITER will pave the way towards fusion power plants delivering electricity to the grid, but huge technical challenges lie in store. After all, those reactors will have to breed their own tritium to become fuel-independent, as John Evans explains.

Big science also involves dedicated user facilities. In this briefing we talk to Gianluigi Botton from the Diamond Light Source in the UK and Mike Witherell from the Lawrence Berkeley National Laboratory about managing such large-scale research infrastructures and their plans for the future.

We hope you enjoy the briefing, and do let us know your feedback on the issue.

Vortex cannon generates toroidal electromagnetic pulses https://physicsworld.com/a/vortex-cannon-generates-toroidal-electromagnetic-pulses/ Thu, 19 Sep 2024 09:34:09 +0000 https://physicsworld.com/?p=116855 Electromagnetic vortex pulses could be employed for information encoding, high-capacity communication and more

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

Electromagnetic “cannons” emit electromagnetic vortex pulses thanks to coaxial horn antennas.

Toroidal electromagnetic pulses can be generated using a microwave horn antenna. This electromagnetic “vortex cannon” produces skyrmion topological structures that might be employed for information encoding or for probing the dynamics of light–matter interactions, according to its developers in China, Singapore and the UK.

Examples of toroidal or doughnut-like topology abound in physics – in objects such as Möbius strips and Klein bottles, for example. It is also seen in simpler structures like smoke rings in air and vortex rings in water, as well as in nuclear currents. Until now, however, no one had succeeded in directly generating this topology in electromagnetic waves.

A rotating electromagnetic wave structure

In the new work, a team led by Ren Wang from the University of Electronic Science and Technology of China and Yijie Shen from Nanyang Technological University in Singapore, together with colleagues from the University of Southampton in the UK, employed wideband, radially polarized, conical coaxial horn antennas with an operating frequency range of 1.3–10 GHz. They used these antennas to create a rotating electromagnetic wave structure in the microwave range.

The antenna comprises inner and outer metal conductors, with 3D-printed conical and flat-shaped dielectric supports at the bottom and top of the coaxial horn, respectively.

“When the antenna emits, it generates an instantaneous voltage difference that forms the vortex rings,” explains Shen. “These rings are stable over time – even in environments with lots of disturbances – and maintain their shape and energy over long distances.”

Complex features such as skyrmions

The conical coaxial horn antenna generates an electromagnetic field in free space that rotates around the propagation direction of the wave structure. The researchers experimentally mapped the toroidal electromagnetic pulses at propagation distances of 5, 50 and 100 cm from the horn aperture. Working in a planar microwave anechoic chamber (a shielded room lined with electromagnetic absorbers), they measured the spatial electromagnetic fields of the antenna, using a scanning frame to move it to the desired measurement area. They then connected a vector network analyser to the transmitting and receiving antennas to obtain the magnitude and phase of the electromagnetic field at different positions.
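A vector network analyser reports each measurement as a complex transmission coefficient (commonly called S21), from which magnitude and phase follow directly. The short sketch below illustrates that step only; the sample values are hypothetical, not the team’s data.

```python
import numpy as np

# Illustrative sketch: extracting magnitude and phase from complex S21
# transmission coefficients, as a VNA-based field scan would. The sample
# values below are hypothetical, one per scan position.
s21 = np.array([0.5 + 0.5j, -0.3 + 0.1j, 0.05 - 0.2j])

magnitude_db = 20 * np.log10(np.abs(s21))  # field magnitude in decibels
phase_deg = np.degrees(np.angle(s21))      # field phase in degrees
```

Repeating this at every scanning-frame position builds up the spatial magnitude and phase maps from which the pulse structure is reconstructed.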

The researchers found that the toroidal pulses contained complex features such as skyrmions. These are made up of numerous electric field vectors and can be thought of as two-dimensional whirls (or “spin textures”). The pulses also evolved over time to more closely resemble canonical Hellwarth–Nouchi toroidal pulses. These structures, first theoretically identified by the two physicists they are named after, represent a radically different, non-transverse type of electromagnetic pulse with a toroidal topology. These pulses, which are propagating counterparts of localized toroidal dipole excitations in matter, exhibit unique electromagnetic wave properties, explain Shen and colleagues.
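A two-dimensional whirl of this kind is classified by an integer topological charge, the skyrmion number Q = (1/4π) ∫ m · (∂m/∂x × ∂m/∂y) dx dy, where m is the local unit vector of the texture. As a hedged illustration of how Q is evaluated numerically, the sketch below uses a textbook Belavin–Polyakov spin texture on a grid; it is not the researchers’ measured field or analysis code.

```python
import numpy as np

# Illustrative sketch: skyrmion number of a textbook 2D spin texture
# (Belavin-Polyakov profile), NOT the researchers' measured field.
R = 1.0                                  # skyrmion radius (arbitrary units)
x = np.linspace(-10, 10, 401)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2

# Unit-vector field m(x, y): m_z = +1 at the core, -1 far away
denom = r2 + R**2
m = np.stack([2 * R * X / denom,
              2 * R * Y / denom,
              (R**2 - r2) / denom])      # shape (3, N, N); |m| = 1 everywhere

# Skyrmion number: Q = (1/4*pi) * integral of m . (dm/dx x dm/dy) dx dy
dmdx = np.gradient(m, x, axis=1)
dmdy = np.gradient(m, x, axis=2)
density = np.sum(m * np.cross(dmdx, dmdy, axis=0), axis=0)
dx = x[1] - x[0]
Q = density.sum() * dx**2 / (4 * np.pi)  # close to an integer of modulus 1
```

The integral counts how many times the vector field wraps the unit sphere, which is why a single skyrmion yields a value of modulus one regardless of its size or deformation.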

A wide range of applications

The researchers say that they got the idea for their new work by observing how smoke rings are generated from an air cannon. They decided to undertake the study because toroidal pulses in the microwave range have applications in a wide range of areas, including cell phone technology, telecommunications and global positioning. “Understanding both the propagation dynamics and characterizing the topological structure of these pulses is crucial for developing these applications,” says Shen.

The main difficulty in these experiments was generating the pulses in the microwave part of the electromagnetic spectrum. The researchers first tried adapting existing optical metasurface methodologies, but failed because a metasurface aperture several metres across would have been required, which was simply impractical to fabricate. They overcame the problem by using a microwave horn emitter, which is far more straightforward to create.

Looking forward, the researchers now plan to focus on two main areas. The first is to develop communication, sensing, detection and metrology systems based on toroidal pulses, aiming to overcome the limitations of existing wireless applications. Secondly, they hope to generate higher-order toroidal pulses, also known as supertoroidal pulses.

“These possess unique characteristics such as propagation invariance, longitudinal polarization, electromagnetic vortex streets (organized patterns of swirling vortices) and higher-order skyrmion topologies,” Shen tells Physics World. “The supertoroidal pulses have the potential to drive the development of ground-breaking applications across a range of fields, including defence systems or space exploration.”

The study is detailed in Applied Physics Reviews.

