Baylor University
Department of Physics
College of Arts and Sciences

Physics News



Top News
•  Dark Energy Survey reveals most accurate measurement of universe's dark matter
•  World's Fastest-Swirling Vortex Simulates the Big Bang
•  UCI celestial census indicates that black holes pervade the universe
•  Cosmic map reveals a not-so-lumpy Universe
•  High-Precision Measurement of the Proton’s Atomic Mass
•  Strange Noise in Gravitational-Wave Data Sparks Debate
•  Starshot: Inside the Plan to Send a Spacecraft to Our Neighbor Star: Hundreds of engineers and scientists have come together to shoot for the stars, literally.
•  Two Students Just Broke a Quantum Computing World Record
•  An easy-to-build desktop muon detector
•  Groundbreaking discovery confirms existence of orbiting supermassive black holes
•  NASA's Kepler Space Telescope Finds Hundreds of New Exoplanets, Boosts Total to 4,034
•  China’s quantum satellite achieves ‘spooky action’ at record distance
•  Scientists make waves with black hole research
•  We Live in a Cosmic Void, Another Study Confirms
•  Scientists Finally Witnessed a Phenomenon That Einstein Thought “Impossible”
•  Charmed Existence: Mysterious Particles Could Reveal Mysteries of the Big Bang
•  A New State of Matter is Discovered – And It’s Strange
•  A Theory of Reality as More Than the Sum of Its Parts
•  Dark Energy May Lurk in the Nothingness of Space
•  What Happens When You Mix Thermodynamics and the Quantum World? A Revolution
•  Alien Civilizations May Number In The Trillions, New Study Says
•  New blackbody force depends on spacetime geometry and topology
•  Gravitational Waves Could Help Us Detect the Universe’s Hidden Dimensions
•  We could detect alien life by finding complex molecules
•  We May Have Uncovered the First Ever Evidence of the Multiverse
•  Scientists Just Discovered an Alien Planet That’s The Best Candidate for Life As We Know It
•  Physicists detect whiff of new particle at the Large Hadron Collider
•  Physicists Discover Hidden Aspects of Electrodynamics
•  A dark matter 'bridge' holding galaxies together has been captured for the first time
•  No, Dark Energy Isn't An Illusion
•  Satellite galaxies at edge of Milky Way coexist with dark matter
•  Magnetic hard drives go atomic
•  Could Mysterious Cosmic Light Flashes Be Powering Alien Spacecraft?
•  NASA is Going to Create The Coldest Spot in the Known Universe
•  Testing theories of modified gravity
•  First Solid Sign that Matter Doesn't Behave Like Antimatter
•  Physicists investigate erasing information at zero energy cost
•  NASA Just Found A Solar System With 7 Earth-Like Planets
•  Nearby Star Has 7 Earth-Sized Worlds - Most In Habitable Zone
•  Data About 2 Distant Asteroids: Clues to the Possible Planet Nine
•  Tune Your Radio: Galaxies Sing When Forming Stars
•  Coders Race to Save NASA's Climate Data
•  You Can Help Scientists Find the Next Earth-Like Planet
•  Scientists Discover Over 100 New Exoplanets
•  Why These Scientists Fear Contact With Space Aliens
•  Scientists May Have Solved the Biggest Mystery of the Big Bang
•  New Research Shows the Universe May Have Once Been a Hologram
•  Dark energy emerges when energy conservation is violated
•  Physicists measure the loss of dark matter since the birth of the universe
•  This star has a secret – even better than 'alien megastructures'
•  A simple explanation of mysterious space-stretching ‘dark energy?’
•  Physicists detect exotic looped trajectories of light in three-slit experiment
•  Actual footage shows what it was like to land on Saturn's moon Titan
•  Quaternions are introduced, October 16, 1843
•  The Sound Of Quantum Vacuum
•  Multiple copies of the Standard Model could solve the hierarchy problem
•  Universe May Have Lost 'Unstable' Dark Matter
•  Vera Rubin, Astronomer Who Did Pioneering Work on Dark Matter, Dies at 88
•  China's Hunt for Signals From the Dark Universe
•  Baylor Physics Ph.D. Graduate Quoted in "How Realistic Is the Interstellar Ship from 'Passengers'?"
•  Shutting a new door on locality
•  Unexpected interaction between dark matter and ordinary matter in mini-spiral galaxies
•  Thermodynamics constrains interpretations of quantum mechanics
•  Billions of Stars and Galaxies to Be Discovered in the Largest Cosmic Map Ever
•  Scientists Measure Antimatter for the First Time
•  Europe's Bold Plan for a Moon Base Is Coming Together
•  Einstein's Theory Just Put the Brakes on the Sun's Spin
•  Dying Star Offers Glimpse of Earth's Doomsday in 5B Years
•  Dark Matter Not So Clumpy After All
•  Scientists Catch "Virtual Particles" Hopping In and Out of Existence
•  New theory of gravity might explain dark matter
•  Supersolids produced in exotic state of quantum matter
•  You Can 3D Print Your Own Mini Universe
•  Creating Antimatter Via Lasers?
•  No, Astronomers Haven't Decided Dark Energy Is Nonexistent
•  Behind This Plant's Blue Leaves Lies a Weird Trick of Quantum Mechanics
•  Small entropy changes allow quantum measurements to be nearly reversed
•  Did the Mysterious 'Planet Nine' Tilt the Solar System?
•  Cosmological mystery solved by largest ever map of voids and superclusters
•  The Universe Has 10 Times More Galaxies Than Scientists Thought
•  Correlation between galaxy rotation and visible matter puzzles astronomers
•  The Spooky Secret Behind Artificial Intelligence's Incredible Power
•  Science of Disbelief: When Did Climate Change Become All About Politics?
•  Eyeballing Proxima b: Probably Not a Second Earth
•  Does the Drake Equation Confirm There Is Intelligent Alien Life in the Galaxy?
•  Scientists build world's smallest transistor
•  'Alien Megastructure' Star Keeps Getting Stranger
•  What's Out There? 'Star Men' Doc Tackles Life Questions Through Science
•  Evidence for new form of matter-antimatter asymmetry observed
•  Giant hidden Jupiters may explain lonely planet systems
•  Rarest nucleus reluctant to decay
•  Weird Science: 3 Win Nobel for Unusual States of Matter
•  Methane didn’t warm ancient Earth, new simulations suggest
•  New 'Artificial Synapses' Pave Way for Brain-Like Computers
•  Stephen Hawking Is Still Afraid of Aliens
•  The Ig Nobel Prize Winners of 2016
•  Teleported Laser Pulses? Quantum Teleportation Approaches Sci-Fi Level
•  China Claims It Developed "Quantum" Radar To See Stealth Planes
•  Earth Wobbles May Have Driven Ancient Humans Out of Africa

Dark Energy Survey reveals most accurate measurement of universe's dark matter
[8/4/2017]
Imagine planting a single seed and, with great precision, being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right.

If you think of the seed as the early universe, and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. In a presentation at the American Physical Society Division of Particles and Fields meeting at the U.S. Department of Energy's (DOE) Fermi National Accelerator Laboratory, DES scientists will unveil the most accurate measurement ever made of the present large-scale structure of the universe.

These measurements of the amount and "clumpiness" (or distribution) of dark matter in the present-day cosmos were made with a precision that, for the first time, rivals that of inferences from the early universe by the European Space Agency's orbiting Planck observatory. The new DES result (the tree, in the above metaphor) is close to "forecasts" made from the Planck measurements of the distant past (the seed), allowing scientists to understand more about the ways the universe has evolved over 14 billion years.

"This result is beyond exciting," said Scott Dodelson of Fermilab, one of the lead scientists on this result. "For the first time, we're able to see the current structure of the universe with the same clarity that we can see its infancy, and we can follow the threads from one to the other, confirming many predictions along the way."

Most notably, this result supports the theory that 26 percent of the universe is in the form of mysterious dark matter and that space is filled with an also-unseen dark energy, which makes up 70 percent of the universe and is driving its accelerating expansion.

Paradoxically, it is easier to measure the large-scale clumpiness of the universe in the distant past than it is to measure it today. In the first 400,000 years following the Big Bang, the universe was filled with a glowing gas, the light from which survives to this day. Planck's map of this cosmic microwave background radiation gives us a snapshot of the universe at that very early time.

Since then, the gravity of dark matter has pulled mass together and made the universe clumpier over time. But dark energy has been fighting back, pushing matter apart. Using the Planck map as a start, cosmologists can calculate precisely how this battle plays out over 14 billion years.
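As a brief aside on how such a calculation is set up (this is standard cosmology, not something quoted in the DES announcement): on large scales the fractional matter overdensity grows according to the linear growth equation, in which gravity drives clustering while the expansion rate, boosted by dark energy, resists it.

```latex
% Linear growth of matter fluctuations (textbook relation, not from the article).
% \delta = (\rho_m - \bar{\rho}_m)/\bar{\rho}_m is the fractional overdensity.
\ddot{\delta} + 2H(t)\,\dot{\delta} = 4\pi G\,\bar{\rho}_m(t)\,\delta
% The 2H\dot{\delta} "Hubble friction" term grows as dark energy accelerates the
% expansion, slowing the growth of the clumpiness that DES measures today.
```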

"The DES measurements, when compared with the Planck map, support the simplest version of the dark matter/dark energy theory," said Joe Zuntz, of the University of Edinburgh, who worked on the analysis. "The moment we realized that our measurement matched the Planck result within 7 percent was thrilling for the entire collaboration."

The primary instrument for DES is the 570-megapixel Dark Energy Camera, one of the most powerful in existence, able to capture digital images of light from galaxies eight billion light-years from Earth.

The camera was built and tested at Fermilab, the lead laboratory on the Dark Energy Survey, and is mounted on the National Science Foundation's 4-meter Blanco telescope, part of the Cerro Tololo Inter-American Observatory in Chile, a division of the National Optical Astronomy Observatory. The DES data are processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Scientists on DES are using the camera to map an eighth of the sky in unprecedented detail over five years. The fifth year of observation will begin in August. The new results released today draw from data collected only during the survey's first year, which covers 1/30th of the sky.

"It is amazing that the team has managed to achieve such precision from only the first year of their survey," said National Science Foundation Program Director Nigel Sharp. "Now that their analysis techniques are developed and tested, we look forward with eager anticipation to breakthrough results as the survey continues."

DES scientists used two methods to measure dark matter. First, they created maps of galaxy positions as tracers, and second, they precisely measured the shapes of 26 million galaxies to directly map the patterns of dark matter over billions of light-years, using a technique called gravitational lensing.

To make these ultraprecise measurements, the DES team developed new ways to detect the tiny lensing distortions of galaxy images, an effect not even visible to the eye, enabling revolutionary advances in understanding these cosmic signals. In the process, they created the largest guide to spotting dark matter in the cosmos ever drawn. The new dark matter map is 10 times the size of the one DES released in 2015 and will eventually be three times larger than it is now.
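To illustrate the statistical idea behind measuring a distortion "not even visible to the eye" (a toy sketch, not the DES pipeline; the 1 percent shear and 0.3 shape scatter below are illustrative assumptions): each galaxy's apparent shape is dominated by its random intrinsic ellipticity, but averaging over millions of galaxies lets a percent-level lensing signal emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

n_gal = 1_000_000        # one million galaxies (DES used ~26 million)
true_shear = 0.01        # assumed 1% lensing distortion (illustrative)
shape_scatter = 0.3      # typical intrinsic ellipticity scatter (assumed)

# Each measured ellipticity = lensing shear + random intrinsic shape.
e_obs = true_shear + shape_scatter * rng.standard_normal(n_gal)

estimate = e_obs.mean()                 # averaging isolates the shear
error = shape_scatter / np.sqrt(n_gal)  # statistical error ~ sigma / sqrt(N)
print(f"recovered shear = {estimate:.4f} +/- {error:.4f}")
# With tens of millions of galaxies the statistical error shrinks further,
# which is why controlling systematic errors dominates the real analysis.
```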

"It's an enormous team effort and the culmination of years of focused work," said Erin Sheldon, a physicist at the DOE's Brookhaven National Laboratory, who co-developed the new method for detecting lensing distortions.

These results and others from the first year of the Dark Energy Survey will be released online and announced during a talk by Daniel Gruen, NASA Einstein fellow at the Kavli Institute for Particle Astrophysics and Cosmology at DOE's SLAC National Accelerator Laboratory, at 5 p.m. Central time. The talk is part of the APS Division of Particles and Fields meeting at Fermilab and will be streamed live.

The results will also be presented by Kavli fellow Elisabeth Krause of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC at the TeV Particle Astrophysics Conference in Columbus, Ohio, on Aug. 9; and by Michael Troxel, postdoctoral fellow at the Center for Cosmology and AstroParticle Physics at Ohio State University, at the International Symposium on Lepton Photon Interactions at High Energies in Guangzhou, China, on Aug. 10. All three of these speakers are coordinators of DES science working groups and made key contributions to the analysis.

"The Dark Energy Survey has already delivered some remarkable discoveries and measurements, and they have barely scratched the surface of their data," said Fermilab Director Nigel Lockyer. "Today's world-leading results point forward to the great strides DES will make toward understanding dark energy in the coming years."
(FULL STORY)

World's Fastest-Swirling Vortex Simulates the Big Bang
[8/8/2017]
Faster than a tornado, speedier than the giant storm swirling on Jupiter — it's the world's fastest-swirling vortex, which scientists have created in a primordial soup of gluey particles meant to re-create the Big Bang.

The swirling particle soup rotates at head-snapping speeds — many times faster than the closest contenders.

However, don't expect this fast-spinning fluid to turn heads anytime soon, as the vortices occur in a material called a quark-gluon plasma that is so small that the signature of this whirling can be detected only by the particles it produces.


An illustration of the quark-gluon plasma created in the Relativistic Heavy Ion Collider at Brookhaven National Laboratory
Credit: Brookhaven National Laboratory

"We can't look at the quark-gluon plasma; it's on the scale of an atomic nucleus," said Michael Lisa, a physicist at The Ohio State University who works on the Relativistic Heavy Ion Collider (RHIC) collaboration, which produced the new results. [The Big Bang to Civilization: 10 Amazing Origin Events]

Hot soup

Right after the Big Bang, a hot primordial stew of elementary particles called quarks and gluons permeated the baby universe. These elementary particles are the building blocks of better-known particles such as protons and neutrons. This quark-gluon plasma has several unique properties. First, at a blazing 7 trillion to 10 trillion degrees Fahrenheit (3.9 trillion to 5.6 trillion degrees Celsius), it's the hottest known fluid. It is also the densest fluid and "nearly perfect" in that it experiences almost no friction, meaning it flows very easily.

To understand exactly what happened in those moments after the Big Bang, scientists have re-created this primordial particle soup in an atom smasher at the RHIC, at Brookhaven National Laboratory in Upton, New York. The RHIC smashes the nuclei of gold atoms together at nearly the speed of light and then uses ultrasensitive detectors to measure the particles that fly off the collision.

Whirling fluid

In the new study, the team analyzed the quark-gluon plasma's vorticity — essentially a measure of its angular momentum or, in colloquial terms, how fast it spins.

Of course, they had a unique obstacle: The RHIC can produce just a teensy amount of the material, and it lives very fleetingly, about 10^-23 seconds. So there is no way to actually "observe" this fluid in the traditional sense.

Instead, scientists look for signatures of its whirling, based on the particles emitted from the soup, Lisa told Live Science. On average, particles inside a spinning fluid should have spins that roughly align with the angular momentum of the fluid. By measuring how much the particles coming off this whirling soup are deflected from their expected path, the team could calculate a rough estimate for the fluid's vorticity — which roughly measures the local spinning motion. In particular, particles known as lambda baryons tend to decay more slowly than other particles, such as protons and neutrons, meaning the RHIC detectors could more easily track their paths before they vanished.
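For readers curious how a spin measurement becomes a vorticity number, heavy-ion physicists commonly use the following thermal-equilibrium estimate (an approximation given here for context; it is not quoted in the article):

```latex
% Approximate relation between hyperon polarization and fluid vorticity
% (standard heavy-ion estimate, not taken from the article):
\omega \;\approx\; \frac{k_B T}{\hbar}\left(P_{\Lambda} + P_{\bar{\Lambda}}\right)
% With a plasma temperature of order 10^{12} K and polarizations of a few
% percent, this gives vorticities of order 10^{21}-10^{22} s^{-1}.
```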

It turns out, the vorticity in the quark-gluon plasma makes the whirling motion inside a tornado seem like a calm day in the park. The vorticity is the fastest ever recorded — much more rapid than that of Jupiter's Great Red Spot, a swirling storm of gas. It's also faster than the previous record holder, a supercooled type of helium nanodroplet, the researchers reported Aug. 2 in the journal Nature.

Understanding the structure of fluid flow in the plasma could reveal insight into the strong nuclear force, which binds atomic nuclei together, the researchers said. Several competing particle theories make predictions about vorticity that could eventually be compared against these experimental results. However, scientists still know too little about the plasma's swirling properties to draw definitive conclusions.

"It's too early to say whether it teaches us something fundamental," Lisa said.

Originally published on Live Science.
(FULL STORY)

UCI celestial census indicates that black holes pervade the universe
[8/13/2017]
After conducting a cosmic inventory of sorts to calculate and categorize stellar-remnant black holes, astronomers from the University of California, Irvine have concluded that there are probably tens of millions of the enigmatic, dark objects in the Milky Way - far more than expected.

"We think we've shown that there are as many as 100 million black holes in our galaxy," said UCI chair and professor of physics and astronomy James Bullock, co-author of a research paper on the subject in the current issue of Monthly Notices of the Royal Astronomical Society.

UCI's celestial census began more than a year and a half ago, shortly after the news that the Laser Interferometer Gravitational-Wave Observatory, or LIGO, had detected ripples in the space-time continuum created by the distant collision of two black holes, each with about 30 times the mass of our sun.

"Fundamentally, the detection of gravitational waves was a huge deal, as it was a confirmation of a key prediction of Einstein's general theory of relativity," Bullock said. "But then we looked closer at the astrophysics of the actual result, a merger of two 30-solar-mass black holes. That was simply astounding and had us asking, 'How common are black holes of this size, and how often do they merge?'"

He said that scientists assume most stellar-remnant black holes - which result from the collapse of massive stars at the end of their lives - will be about the same mass as our sun. To see evidence of two black holes of such epic proportions finally coming together in a cataclysmic collision had some astronomers scratching their heads.

UCI's work was a theoretical investigation into the "weirdness of the LIGO discovery," Bullock said. The research, led by doctoral candidate Oliver Elbert, was an attempt to interpret the gravitational wave detections through the lens of what is known about galaxy formation and to form a framework for understanding future occurrences.

"Based on what we know about star formation in galaxies of different types, we can infer when and how many black holes formed in each galaxy," Elbert said. "Big galaxies are home to older stars, and they host older black holes too."

According to co-author Manoj Kaplinghat, UCI professor of physics and astronomy, the number of black holes of a given mass per galaxy will depend on the size of the galaxy.

The reason is that larger galaxies have many metal-rich stars, and smaller dwarf galaxies are dominated by big stars of low metallicity. Stars that contain a lot of heavier elements, like our sun, shed a lot of that mass over their lives.

When it comes time for one to end it all in a supernova, there isn't as much matter left to collapse in on itself, resulting in a lower-mass black hole. Big stars with low metal content don't shed as much of their mass over time, so when one of them dies, almost all of its mass will wind up in the black hole.

"We have a pretty good understanding of the overall population of stars in the universe and their mass distribution as they're born, so we can tell how many black holes should have formed with 100 solar masses versus 10 solar masses," Bullock said. "We were able to work out how many big black holes should exist, and it ended up being in the millions - way more than I anticipated."

In addition, to shed light on subsequent phenomena, the UCI researchers sought to determine how often black holes occur in pairs, how often they merge, and how long it takes. They wondered whether the 30-solar-mass black holes detected by LIGO were born billions of years ago and took a long time to merge or came into being more recently (within the past 100 million years) and merged soon after.

"We show that only 0.1 to 1 percent of the black holes formed have to merge to explain what LIGO saw," Kaplinghat said. "Of course, the black holes have to get close enough to merge in a reasonable time, which is an open problem."

Elbert said he expects many more gravitational wave detections so that he and other astronomers can determine if black holes collide mostly in giant galaxies. That, he said, would tell them something important about the physics that drive them to coalesce.

According to Kaplinghat, they may not have to wait too long, relatively speaking. "If the current ideas about stellar evolution are right, then our calculations indicate that mergers of even 50-solar-mass black holes will be detected in a few years," he said.
(FULL STORY)

Cosmic map reveals a not-so-lumpy Universe
[8/3/2017]
Odd results could still be consistent with the 'standard model' of cosmology.
(FULL STORY)

High-Precision Measurement of the Proton’s Atomic Mass
[7/18/2017]
We report on the precise measurement of the atomic mass of a single proton with a purpose-built Penning-trap system. With a precision of 32 parts per trillion, our result not only improves on the current CODATA literature value by a factor of 3, but also disagrees with it at a level of about 3 standard deviations.
(FULL STORY)

Strange Noise in Gravitational-Wave Data Sparks Debate
[6/30/2017]
The team that discovered gravitational waves put their data online. Now an independent group of researchers claims that they’ve found what might be a serious problem.
(FULL STORY)

Starshot: Inside the Plan to Send a Spacecraft to Our Neighbor Star: Hundreds of engineers and scientists have come together to shoot for the stars, literally.
[7/11/2017]
By Shannon Stirone
As a species, we have made magnificent strides in robotic space exploration in the past decade. From exploring Pluto close-up for the first time to discovering our solar system is rife with underground liquid oceans, we now understand our little neighborhood of planets and moons better than ever before. It's time to start talking about how we are going to explore the stars.

The Breakthrough Initiatives, created by Russian billionaire physicist Yuri Milner, is one of the most forward-thinking space exploration groups in the world. Among Breakthrough's many ambitious projects is Breakthrough Starshot. The goal is to send hundreds of gram-sized spacecraft to the nearest star—Proxima Centauri, some 4.2 light-years away—and have them arrive within our lifetimes. The craft would then attempt to communicate with Earth and transmit photos of Proxima Centauri and its orbiting planet, Proxima b, back to us.

The Breakthrough Initiatives recently held an international conference called Breakthrough Discuss at Stanford University. Hundreds of researchers and engineers met to flesh out Breakthrough's many ambitious space exploration goals. Starshot attracted perhaps the most interest due to its thrilling prospects and many technical challenges to overcome.

The verdict? "It looks feasible," according to Harvard science professor Avi Loeb, who chairs the advisory committee for Breakthrough Starshot.


This artist's impression shows a view of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri, the closest star to the solar system.
ESO/M. Kornmesser
Even though the target star system is closer to us than any other, it's still mind-bogglingly far away: 25 trillion miles. Voyager 1, the spacecraft that has traveled farthest from Earth, has been flying at 38,000 mph for forty years, and it's only a tiny fraction closer to Proxima Centauri than it was when it launched. At Voyager's rate, it would take tens of thousands of years for the spacecraft to get anywhere close to Proxima Centauri, even if it were headed in the right direction.
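A quick back-of-the-envelope check of that claim, using only the two numbers quoted in the paragraph above (25 trillion miles and 38,000 mph):

```python
distance_miles = 25e12   # distance to Proxima Centauri quoted in the article
voyager_mph = 38_000     # Voyager 1's speed quoted in the article

hours = distance_miles / voyager_mph
years = hours / (24 * 365.25)
print(f"~{years:,.0f} years")   # roughly 75,000 years at Voyager-like speed
```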

Conventional rocket launches and gravity assist maneuvers just won't take us anywhere near the stars. We need a new plan.

Laser Beams and Light Sails


Concept image of a spherical light sail being accelerated by laser propulsion from Earth.
Michael Stillwell
Spaceflight generally evokes visions of giant rockets with fiery tails erupting off the pad at Cape Canaveral and flying out beyond the atmosphere. To maneuver to a destination after launch, spacecraft often use a liquid rocket fuel called hydrazine. This potent propellant, however, is much too heavy to launch in large quantities. It would be incredibly inefficient just to launch enough fuel to Mars for a return flight, let alone enough for an interstellar voyage. Fortunately, there's a much more efficient way to zip around the stars, and it uses nothing more than energy from beams of electromagnetic radiation.

Light sails are reflective surfaces resembling tin foil that use photons from a source of light, such as a laser beam or the sun, to propel a spacecraft. When photons bounce off the reflective surface, they transfer a tiny amount of momentum, giving the craft a small push that steadily accelerates it in the near-vacuum of space.

The technology isn't just theoretical. In 2010, the Japanese Aerospace Exploration Agency (JAXA) launched a craft called IKAROS—the first successful interplanetary probe to use light sailing as a means of propulsion. The Planetary Society also launched a light sail back in June 2015, and the institution is working on a new sail, the LightSail 2, slated for launch later this year.


Breakthrough Starshot wants to take light sail technology even farther out to space—all the way to Proxima Centauri. Last year, the organization announced a plan to use light sailing and laser propulsion to accelerate dozens or even hundreds of nano-spacecraft fast enough to reach Proxima Centauri in a matter of decades. We're talking about relativistic speeds, roughly 20 percent of the speed of light, or somewhere around 100 million mph. Only at such a ludicrous speed could a probe reach Proxima Centauri in a reasonable two or three decades. Then it will take another four years or so for the radio signals to get back to Earth, traveling at the full-bore speed of light.
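Another rough sanity check on the quoted timeline, using round numbers (the 4.2-light-year distance and 20 percent of light speed come from the article; the arithmetic is a sketch):

```python
distance_ly = 4.24        # distance to Proxima Centauri, light-years
cruise_speed_c = 0.20     # cruise speed as a fraction of light speed

travel_years = distance_ly / cruise_speed_c   # ~21 years of coasting
signal_years = distance_ly                    # the radio reply returns at c
print(f"cruise ~{travel_years:.0f} yr, reply ~{signal_years:.1f} yr")
# Roughly 25 years from launch to first data back on Earth, consistent
# with the "two or three decades" plus four-year signal delay quoted above.
```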

"IN DOING THESE CALCULATIONS, NOTHING HAS COME UP YET THAT SEEMS LIKE IT'S NOT POSSIBLE."
The probes themselves would be little more than small computer chips with a smartphone-like camera, a radio transmitter, and a few other basic electronics. Cornell University is currently working on a project called KickSat to develop just this type of tiny spacecraft, which the KickSat team calls "chipsats." These chipsats are to be deployed from a CubeSat after launch, and in the future, little spacecraft with light sails could be released in orbit the same way. Yuri Milner has met with the KickSat team to discuss their chipsats, also known as Sprites, and the possibility of adapting them for a trip to Proxima Centauri.


To send light sail probes on a journey of this scale, the energy from the sun isn't going to cut it. The spacecraft's sails would need to be propelled by the light of a powerful, concentrated laser beam. This is one of the most challenging parts of sending nanoprobes on a journey to the stars: building enough laser infrastructure on Earth to propel the small craft.

Dozens of large lasers constructed around the globe would need to work together to form an array and coalesce into one powerful beam of light. According to Breakthrough, an enormous, global network of lasers would need to continuously hit the light sails for only about two minutes to get the little probes up to 20 percent the speed of light.


Convincing the space agencies of the world to contribute to a global laser system comes with its own set of logistical and engineering challenges, but it is certainly possible with large-scale cooperation. The Starshot team's problems don't end there, though. There's also the small problem of making sure everything doesn't get shredded to bits as it flies through space at a million miles per hour.

A Spherical Sail

While the laser-propulsion plan has remained unchanged since the initial announcement, the Breakthrough Starshot team is just starting to dig deep into the engineering challenges. The Breakthrough Discuss conference at Stanford was the first large-scale meeting to develop a plan for Starshot, and those championing the mission have no short supply of problems to overcome.

Zach Manchester, an aerospace engineer and creator of the KickSat project, is working with the Starshot team to develop a concept for the solar sail, the main mechanism for getting to Proxima Centauri. Initially he thought a traditional, flat, kite-like sail—similar to the one used by IKAROS—would be the best way to go. But after a year of study, Manchester suggested the Starshot solar sail would probably need to be spherical instead of flat, making it look something like a small disco ball once deployed. Building a spherical sail also introduces the possibility of putting the probe itself inside the sail, rather than having it attached to the middle or towed along behind.


Concept image of a spherical light sail being accelerated with laser propulsion.
Michael Stillwell
The thin sail would need to reflect about 99.999 percent of the powerful laser light or it would burn up almost instantly. The rapid acceleration to one-fifth the speed of light in about two minutes would subject the sail to around 60,000 g's. The material will not only need to be highly reflective, but also sturdy enough to stand up to the forces from 60,000 times the acceleration of Earth's gravity.
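That g-figure follows from the numbers already given; a rough estimate assuming a constant two-minute boost (the true acceleration profile would differ):

```python
c = 3.0e8                  # speed of light, m/s
v_final = 0.2 * c          # one-fifth of light speed, from the article
boost_seconds = 120        # "about two minutes" of laser illumination

accel = v_final / boost_seconds    # assume constant acceleration (a sketch)
g_load = accel / 9.81
print(f"~{g_load:,.0f} g")   # ~51,000 g, the same order as the ~60,000 g quoted
```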

Developing the sail could prove harder than building enormous lasers all over the world. And then there's the issue of communicating with a spacecraft that's 25 trillion miles away.

How Will We Know If It Worked?

"We've identified 20 of the biggest challenges, and one of the biggest is the communication delay between the spacecraft and the star, which is 4 light-years away," says Avi Loeb, Professor of Science at Harvard University and chair of the advisory committee for Breakthrough Starshot. "We have to be able to send the photographic data that's being recorded, but you can't focus the beam of the laser at that distance. When we went to look over the numbers it looks feasible—it'll just be very challenging."

Currently, it takes about twenty minutes to receive 250 megabits of data from spacecraft orbiting Mars. Data from Voyager 1 takes more than a day and a half to phone home from 10 billion miles away. Even if the Starshot team gets a spacecraft to Proxima Centauri in a few decades, any photos of the enticing planet Proxima b will take over four years to reach Earth, and the more data we transmit, the longer it will take.


The Atacama Large Millimeter/submillimeter Array (ALMA), the largest radio telescope array in the world, located in the high desert of Chile. Radio telescope arrays such as this will be crucial to detecting a signal from a probe at Proxima Centauri.
ESO/C. Malin
As it stands today, Proxima b is the only planet we know of in the entire Alpha Centauri system, which includes the small red dwarf star Proxima Centauri and two larger stars, Alpha Centauri A and Alpha Centauri B. However, there is a good chance that other planets lurk in the system, and we simply have not spotted them because their orbits do not take them directly in front of their host stars from our perspective. A probe could potentially reveal undiscovered planets in the Alpha Centauri system.

To get all that photographic data back—data that could very well lead to the discovery of new worlds around our closest neighbor stars—we will need to improve our ground-based receivers and radio telescopes. It is possible that a global array of radio dishes could distinguish the signals of the probes. China's new Five-hundred-meter Aperture Spherical radio Telescope (FAST), the largest single-dish radio telescope in the world, is already being used by Milner's Breakthrough Listen mission to search for signals from intelligent life. The enormous dish could be crucial for helping us detect a signal from a nanoprobe at Proxima Centauri.

"We Haven't Found a Deal Breaker Yet"

The Starshot initiative is ambitious and daring to say the least, but it's not the first time humans have set out to test the limits of engineering. Fortunately, both Loeb and Manchester felt great after the two-day discussion. "I came out of it with a lot more hope and a mindset that everyone on board thinks this is doable. We haven't found a deal breaker yet, basically. In doing these calculations nothing has come up yet that seems like it's not possible," says Manchester.


The two bright stars are Alpha Centauri A (left) and Alpha Centauri B (right). The faint red star in the center of the red circle is Proxima Centauri.
Skatebiker
The success of the Starshot project has huge implications not just for interstellar travel, but for the ease of exploring and studying our own solar system. If we can develop a system to launch small probes at relativistic speeds, a spacecraft that would normally take two years to get to Mars could get there in only two hours. If Starshot technology is developed, and we wanted to photograph something in the outer solar system, we could simply launch a nanoprobe to arrive in days or weeks rather than years. The speed of planetary science studies would accelerate tremendously.

While Starshot is still a nascent project, the hundreds of scientists and engineers who attended the conference were in good spirits about the possibilities. They all trust that together they can work out the engineering kinks required to make something of this magnitude work. Surely if the team from Starshot succeeds, whether it's 30 years from now or 100, they will have single-handedly revolutionized the way we explore the cosmos.

We are on the verge of not just interplanetary exploration, but interplanetary infrastructure and industry as well. If Breakthrough can pull off its Starshot, we will be well on our way to a new era of interstellar exploration. It's time to start building some big ol' lasers.
(FULL STORY)

Two Students Just Broke a Quantum Computing World Record
[7/5/2017]
Researchers from Switzerland have successfully simulated a 45-qubit quantum circuit, breaking the record for the greatest number of qubits to be simulated. This important milestone puts humanity one step closer to "quantum supremacy," the point at which quantum computers could outperform any traditional computer.
(FULL STORY)

An easy-to-build desktop muon detector
[6/14/2017]
On airplanes I am often asked about the blinking metallic device connected to my laptop’s USB port. To assuage any suspicions, I explain that I’m a third-year physics graduate student at MIT and that the little device is actually a cosmic-ray-muon detector.

Over the past few years that detector has evolved from an instrument for a multimillion-dollar experiment to a device that high school and college physics students can construct themselves. The goal of a new program called CosmicWatch is to encourage students to build the detectors, which weigh in at less than 100 g and cost less than $100, and explore the effects of the particles that are constantly raining down on Earth’s surface.

My foray into muon-detector construction began when my supervisor, Janet Conrad, and I were tasked with assisting in an upgrade of the IceCube Neutrino Observatory, a cubic-kilometer particle detector built deep in the Antarctic glacier near the South Pole. IceCube has the ability to detect the occasional astrophysical neutrino from phenomena such as gamma-ray bursts, supernovae, and black holes (see Physics Today, June 2014, page 30). On a far more regular basis, the observatory sees a drizzle of cosmic-ray muons. The charged particles are a decay product of the particles that form when high-energy cosmic rays collide with molecules in Earth’s atmosphere. Muons are extremely penetrating, which enables a small fraction of them to travel the more than 1.5 km through the Antarctic ice to the IceCube detector.

As part of IceCube’s low-energy upgrade, called PINGU, Conrad and I planned to build optically isolated scintillator targets and place them throughout the detector. If a charged particle passed through the plastic scintillator, it would emit light that we could collect using a silicon photomultiplier. Whenever the photomultiplier registered enough light at the same time as a triggered event in IceCube, we would know that the particle that triggered IceCube also passed through our target; we could use that information to help determine the particle’s location and trajectory. Conrad and I called the targets muon-tagging optical modules.

The first detector prototype was very simple. I filled a small PVC pipe with liquid scintillator and inserted some circuitry and a silicon photomultiplier. Two wires penetrated the PVC cap: one for biasing the photomultiplier and one for outputting data to an oscilloscope. It was not a great design. The scintillator leaked around the cap threads, and the device looked more like a homemade bomb from a cheap movie than a particle detector. But hey, it worked. We could immediately see the signals produced from cosmic-ray muons passing through the scintillator.

The next iteration of the detector did away with the liquid scintillator and PVC piping. We found some centimeter-thick plastic scintillator panels from an old cosmic-ray experiment and built a proper light-tight enclosure from some scrap aluminum found in the machine shop. I also came across an Arduino and high-speed operational amplifiers in the MIT electronics recycling pile. Those parts, along with some pulse-shaping circuitry, resulted in a simple data acquisition system. We were able to record data directly to a computer as well as on the oscilloscope. The cost of the whole device was less than $100, with the photomultiplier accounting for the bulk of the expense.

In a June 2016 paper, we described exactly how we built the detector and provided a website link that contained all the information about our circuit boards, computer-aided design drawings, and Arduino software. Within a few days after submission to the arXiv, emails began pouring in. I was stunned to see that many of them came not from particle astrophysicists but from high school students with their own ideas for measurements or improvements. An MIT student, Mgcini Keith Phuthi, read the paper and modified our design so that his detector would communicate with his laptop through Bluetooth.

Phuthi and several other undergraduate students joined our little group to set up a small production facility. Once we started working with the new students, it was obvious that building the detector touched on several important skills. The students learned about shop practices, working with printed circuit boards, and programming microcontrollers.

We set out to see if our device would be suitable for MIT’s Junior Lab course, a class on physics lab work for undergrads. In the process, we stumbled on another use for the detector. We approached a cabinet in the corner of one lab, and as soon as we were within a meter of it, the count rate exploded; there was obviously something radioactive in there. We had a pretty good idea that it must be coming from some active gamma-ray source. One by one we took each radioactive isotope out of the cabinet and brought it close to the detector. We each had our own guess (I was thinking it would be a new cobalt-60 source), but it turned out the culprit was a large jar partially filled with dark gray powder: uranium salts. Not something I thought you could store in an undergraduate lab.

We also found something interesting in Conrad’s office. On the wall, next to negatives from a bubble chamber and a lead-glass calorimeter, was a bright orange ceramic plate. It turns out that decades ago, Fiesta dinnerware was glazed with a depleted uranium–based coating. Uranium has a very long half-life, and many of the decay daughters emit radiation in the form of gamma rays. I was surprised to see so much radiation coming from dinnerware!

Over the next few weeks, we received many emails from students who wanted to build detectors for high-altitude balloon missions. The appeal of our detector stemmed from the fact that it was small and could be battery (or USB) powered, with data stored locally in a Raspberry Pi. To help with such projects, we decided to redesign the detector one more time to make it lighter and easier to build.

Our latest detector weighs 68 g (the model in our 2016 paper was about 10 times as heavy), draws less than a watt of power, and has an improved low-signal response. The design is so simple that it should take students just a few hours to build a full detector from scratch.

The detector is starting to gain international interest. Recently I started working with Katarzyna Frankiewicz, a PhD student from the National Center for Nuclear Research (NCBJ) in Poland. She and a colleague, Paweł Przewłocki, are working on improving the software side of the detector; they created a website for project information and data acquisition. And in collaboration with NCBJ’s education and training division, Frankiewicz and Przewłocki are about to start a new educational program for high school students using 20 detectors that NCBJ and MIT built together.

Now that we have a unique detector, an international group of enthusiastic scientists, and lots of experience helping students build desktop muon detectors, we are ready to launch the CosmicWatch program. This summer our goal is to produce the first set of 100 kits, which we will use to teach a class on particle detection and astrophysics for incoming students at the Wisconsin IceCube Particle Astrophysics Center and NCBJ. Some of those detectors will be sent to local high schools for teachers to use in demonstrations. Instructors could measure the angular dependence of the cosmic-ray-muon flux, demonstrate relativistic effects with a high-altitude measurement, and conduct muon tomography. Over the winter we will move to the next generation of detectors, which will have single-photon detection and hardware-coincidence capabilities, an SD card reader, and environmental sensors.

We are not alone in the community of cosmic-ray-muon programs. Upon developing the detector, we discovered that several other groups are working toward a similar goal. We are hoping to collaborate with them to expand on what we’ve designed. As the project grows, we hope to be able to use the detectors for useful physics measurements. One idea is to install the detectors on planes and ships to map out cosmic-ray fluxes throughout the world. Of course, that would require further R&D and therefore more funding.

The airplane conversations regarding my strange little USB device typically end here. But I’m able to capture my questioners’ attention at least one last time when I show them the measurement of the cosmic-ray-muon rate, shown in the graph below. The beauty of a good muon detector—even a small, cheap one—is that it transforms a fundamental but invisible aspect of nature into something we can see.


I used one of the detectors to measure the absolute rate of muons on a flight from Boston to Chicago. (The x-axis shows seconds.) As the airplane climbed to a cruising altitude of 9144 m, the drizzle of muons turned into a downpour.
Spencer N. Axani is a graduate student at MIT working with Janet Conrad. He earned an undergraduate degree in physics from the University of Alberta.
(FULL STORY)

Groundbreaking discovery confirms existence of orbiting supermassive black holes
[6/28/2017]
For the first time ever, astronomers at The University of New Mexico say they’ve been able to observe and measure the orbital motion between two supermassive black holes hundreds of millions of light years from Earth – a discovery more than a decade in the making.

UNM Department of Physics & Astronomy graduate student Karishma Bansal is the first-author on the paper, ‘Constraining the Orbit of the Supermassive Black Hole Binary 0402+379’, recently published in The Astrophysical Journal. She, along with UNM Professor Greg Taylor and colleagues at Stanford, the U.S. Naval Observatory and the Gemini Observatory, have been studying the interaction between these black holes for 12 years.

“For a long time, we’ve been looking into space to try and find a pair of these supermassive black holes orbiting as a result of two galaxies merging,” said Taylor. “Even though we’ve theorized that this should be happening, nobody had ever seen it until now.”

In early 2016, an international team of researchers, including a UNM alumnus, working on the LIGO project detected the existence of gravitational waves, confirming Albert Einstein's 100-year-old prediction and astonishing the scientific community. These gravitational waves were the result of two stellar-mass black holes (~30 solar masses each) colliding in space within the Hubble time. Now, thanks to this latest research, scientists will be able to start to understand what leads up to the merger of supermassive black holes that creates ripples in the fabric of space-time and begin to learn more about the evolution of galaxies and the role these black holes play in it.

“Even though we’ve theorized that this should be happening, nobody had ever seen it until now.” – Professor Greg Taylor, UNM Department of Physics & Astronomy
Using the Very Long Baseline Array (VLBA), a system made up of 10 radio telescopes across the U.S. and operated in Socorro, N.M., researchers have been able to observe several frequencies of radio signals emitted by these supermassive black holes (SMBH). Over time, astronomers have essentially been able to plot their trajectory and confirm them as a visual binary system. In other words, they’ve observed these black holes in orbit with one another.

“When Dr. Taylor gave me this data I was at the very beginning of learning how to image and understand it,” said Bansal. “And, as I learned there was data going back to 2003, we plotted it and determined they are orbiting one another. It’s very exciting.”

For Taylor, the discovery is the result of more than 20 years of work and an incredible feat given the precision required to pull off these measurements. At roughly 750 million light years from Earth, the galaxy named 0402+379 and the supermassive black holes within it are incredibly far away, but they are also at the perfect distance from Earth and each other to be observed.

Bansal says these supermassive black holes have a combined mass of 15 billion times that of our sun, or 15 billion solar masses. The unbelievable size of these black holes means their orbital period is around 24,000 years, so while the team has been observing them for over a decade, they’ve yet to see even the slightest curvature in their orbit.
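A rough consistency check on why the orbit still looks like a straight line, using Kepler's third law with the mass and period quoted above (a sketch that ignores eccentricity, inclination, and projection effects):

```python
# Kepler's third law in solar-system units: a[AU]^3 = M[Msun] * P[yr]^2
m_total_suns = 15e9        # combined mass quoted in the article, solar masses
period_years = 24_000      # orbital period quoted in the article

a_au = (m_total_suns * period_years**2) ** (1.0 / 3.0)
a_ly = a_au / 63_241       # astronomical units per light-year
print(f"separation ~{a_ly:.0f} light-years")   # a few tens of light-years

# Over ~12 years of monitoring the pair sweeps out only a tiny arc:
print(f"fraction of one orbit observed ~{12 / period_years:.2%}")
```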

“If you imagine a snail on the recently-discovered Earth-like planet orbiting Proxima Centauri – 4.243 light years away – moving at 1 cm a second, that's the angular motion we're resolving here,” said Roger W. Romani, professor of physics at Stanford University and member of the research team.

“What we’ve been able to do is a true technical achievement over this 12-year period using the VLBA to achieve sufficient resolution and precision in the astrometry to actually see the orbit happening,” said Taylor. “It’s a bit of triumph in technology to have been able to do this.”

While the technical accomplishment of this discovery is truly amazing, Bansal and Taylor say the research could also teach us a lot about the universe, where galaxies come from and where they’re going.

"The orbits of binary stars provided tremendous insights about stars,” said Bob Zavala, an astronomer with the U.S. Naval Observatory. “Now we'll be able to use similar techniques to understand super-massive black holes and the galaxies they reside within."

Continuing to observe the orbit and interaction of these two supermassive black holes could also help us gain a better understanding of what the future of our own galaxy might look like. Right now, the Andromeda galaxy, which also has a SMBH at its center, is on a path to collide with our Milky Way, meaning the event Bansal and Taylor are currently observing, might occur in our galaxy in a few billion years.

“Supermassive black holes have a lot of influence on the stars around them and the growth and evolution of the galaxy,” explained Taylor. “So, understanding more about them and what happens when they merge with one another could be important for our understanding for the universe.”

Bansal says the research team will take another observation of this system in three or four years to confirm the motion and obtain a precise orbit. In the meantime, the team hopes that this discovery will encourage related work from astronomers around the world.
(FULL STORY)

NASA's Kepler Space Telescope Finds Hundreds of New Exoplanets, Boosts Total to 4,034
[6/19/2017]
NASA has unveiled the complete set of data from the first four years of the agency's Kepler Space Telescope mission, which stared at a single patch of the sky in the search for alien planets. The result: Kepler has discovered 219 new candidates since NASA's last data unveiling, including 10 near-Earth-size planet candidates in the so-called habitable zone around their stars where the conditions are just right for liquid water to exist on a planet's surface — a key feature in the search for habitable worlds.

The new discoveries boost Kepler's total to 4,034 candidate planets during its mission, 2,335 of which were later confirmed by follow-up observations, NASA officials said in a statement. The 10 newfound potentially Earth-size worlds bring Kepler's total up to 50 of that type of exoplanet, with more than 30 of those being confirmed, NASA officials said during a briefing today (June 19).

The researchers also revealed a surprising divide between small, Earth-like planets and mini-Neptunes gleaned from the data.

The planets characterized by NASA's Kepler mission (yellow dots) and other surveys split into several different broad planet types. Future exoplanet surveys will reveal small planets orbiting further from their stars in the corner marked "frontier".
Credit: NASA/Ames Research Center/Natalie Batalha/Wendy Stenzel




"With this catalog we're able to extend [our analysis of planets' demographics] out to the longest periods, those periods that are most similar to our Earth," said Susan Thompson, a Kepler research scientist for the SETI Institute in California and lead author on the new catalog study.

"As a result, this survey catalog will be the foundation for directly answering one of astronomy's most compelling questions: How many planets like our Earth are actually in the galaxy?"

According to the researchers, Kepler discovered more than 80 percent of all planet candidates and confirmed exoplanets ever found. This catalog is the final release of data from Kepler's four-year primary mission, which examined a narrow patch of sky in the Cygnus constellation. Kepler launched in 2009, and completed its primary mission in 2013. Now, it's in an extended mission known as K2.

To find planets, Kepler uses the transit method: The space telescope tracked stars over a long period of time so scientists could identify when the stars dimmed briefly, which could suggest a planet crossing between the star and Earth.

That process discovered potential planets like the newly found KOI 7711 (short for Kepler object of interest), an exoplanet that appears very much like Earth — just 1.3 times Earth's radius at an orbit that lets the planet feel about as much radiation as Earth gets from the sun. For KOI 7711 and the other planets, the percent the star dimmed let researchers determine its size, and the frequency of the dimming revealed the orbit.
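That sentence compresses two standard relations, stated explicitly below; the worked number assumes a roughly Sun-sized host star, which the article does not actually specify:

```latex
% Planet size from transit depth, orbit size from period (standard relations):
\frac{\Delta F}{F} \;\approx\; \left(\frac{R_p}{R_\star}\right)^{2},
\qquad a^{3} \propto P^{2}
% Example (assuming a Sun-sized star): a 1.3 R_Earth planet dims its star by
% only (1.3 R_Earth / R_Sun)^2 ~ 0.014%, which is why the vetting described
% below has to be so careful.
```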

To determine which dimmings of the 200,000 stars observed by Kepler were likely to be planets, the data went through an intensive vetting process. As Thompson described, about 34,000 signals were found — both transiting planets and noise that could have come from the camera or star itself. After vetting, the total came down to about 4,000 candidates, 50 of which were Earth-size and in the habitable zone.

The researchers then put simulated transits into the data and recorded how many were actually picked up by the software — determining how many transits the process might have missed. And they put noise through the process, too, checking how many were marked as transiting planets — so they knew how many planets were likely to be false alarms.
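A minimal sketch of that bookkeeping with made-up numbers (the real injection-and-recovery analysis is far more detailed): injected fake transits measure completeness, and pure-noise inputs measure the false-alarm rate.

```python
# Toy version of the vetting bookkeeping (illustrative numbers, not Kepler's).
injected_transits = 10_000        # simulated transits fed into the pipeline
recovered_transits = 8_200        # how many the software flagged
completeness = recovered_transits / injected_transits

noise_light_curves = 10_000       # noise-only inputs fed into the pipeline
noise_flagged = 150               # noise events wrongly flagged as planets
false_alarm_rate = noise_flagged / noise_light_curves

print(f"completeness ~{completeness:.0%}, false-alarm rate ~{false_alarm_rate:.1%}")
# Together these numbers let astronomers turn the raw candidate list into an
# estimate of how many planets of each size the survey truly sampled.
```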

The eighth Kepler planet catalog includes 10 new planet candidates that are less than twice the size of Earth in their stars' habitable zones. Here, 49 such planets from the full catalogue are graphed.
Credit: NASA/Ames Research Center/Wendy Stenzel
During the briefing, researchers also discussed a surprising distinction they found between super-Earths, which are rocky planets with thin atmospheres, up to about 1.75 times Earth's size, and mini-Neptunes that form dense gas balls 2 to 3.5 times the size of Earth.

A research group used the Keck Observatory in Hawaii to gauge the size of 1,300 stars measured by Kepler, which allowed them to more precisely pinpoint the stars' sizes — and therefore the size of their potential planets. They found that while researchers had thought there was a smooth population containing the whole range of sizes between 1 and 4 times that of Earth, there was a much sharper divide.

"This is a major new division in the family tree of exoplanets, somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life," said Benjamin Fulton, a researcher at the University of Hawaii in Manoa and the California Institute of Technology and lead author on the Keck study.

Researchers combining data from the Keck telescope in Hawaii and the Kepler space telescope found that there's a sharp divide between super-Earths and mini-Neptunes.
Credit: NASA/Ames Research Center/JPL-Caltech/R. Hurt
That sharp divide likely comes from the planet formation process, Fulton said: Planets' rocky cores form from smaller pieces, and then the protoplanet's gravity attracts hydrogen and helium gas. A little bit of gas makes the planet much bigger, putting it on the mini-Neptune side of things. Planets in the middle, Fulton said, can suffer a setback that puts them back on the rocky super-Earth side of things: The newfound atmosphere can be baked away if the star is too close by or there's not enough to start with.

While the Kepler data set provides the best-ever glimpse of exoplanet demographics for one slice of the sky, future telescopes — like NASA's Transiting Exoplanet Survey Satellite set to launch in 2018 — will allow researchers to follow up on these Kepler finds to characterize the planets even more. They may someday even take direct images of exoplanets with tools like Hubble Space Telescope's successor, the James Webb Space Telescope (also set to launch in 2018). Plus, additional data from Kepler's current K2 mission will give researchers a glimpse into what things look like in other parts of the sky, revealing planets around star clusters of different ages, with different iron contents, and many more low-mass stars than Kepler saw the first time around, the researchers said.

"It feels a bit like the end of an era, but actually I see it as a new beginning," Thompson said. "It's amazing the things that Kepler has found. It has shown us these terrestrial worlds, and we still have all this work to do to really understand how common Earths are in the galaxy."

"I'm really excited to see what people are going to do with this catalog, because this is the first time we have a population that is really well-characterized and we can now do these statistical studies and really start to understand the Earth analogues out there," she added.

Editor's Note: This article was updated at 2:45 p.m. EDT to include more details and background from NASA's press conference. Video produced by Space.com's Steve Spaleta.

Email Sarah Lewin at slewin@space.com or follow her @SarahExplains. Follow us @Spacedotcom, Facebook and Google+. Original article on Space.com.

- See more at: https://www.space.com/37242-nasa-kepler-alien-planets-habitable-worlds-catalog.html#sthash.hQwV1yUi.dpuf
(FULL STORY)

China’s quantum satellite achieves ‘spooky action’ at record distance
[6/15/2017]
Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China's growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

"It's a huge, major achievement," says Thomas Jennewein, a physicist at the University of Waterloo in Canada. "They started with this bold idea and managed to do it."

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object's quantum properties occupy multiple states at once: like Schrödinger's cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as "spooky action at a distance."

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be "quantum keys" that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. "Quantum repeaters" that rebroadcast quantum information could extend a network's reach, but they aren't yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. "Once you have satellites distributing your quantum signals throughout the globe, you've done it," says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. "You've leapfrogged all the problems you have with losses in fibers."


CREDITS: (GRAPHIC) C. BICKEL/SCIENCE; (DATA) JIAN-WEI PAN
Jian-Wei Pan, a physicist at the University of Science and Technology of China in Shanghai, got the chance to test the idea when the Micius satellite, named after an ancient Chinese philosopher, was launched in August 2016. The satellite is the foundation of the $100 million Quantum Experiments at Space Scale program, one of several missions that China hopes will make it a space science power on par with the United States and Europe.

In their first experiment, the team sent a laser beam into a light-altering crystal on the satellite. The crystal emitted pairs of photons entangled so that their polarization states would be opposite when one was measured. The pairs were split, with photons sent to separate receiving stations in Delingha and Lijiang, 1200 kilometers apart. Both stations are in the mountains of Tibet, reducing the amount of air the fragile photons had to traverse. This week in Science, the team reports simultaneously measuring more than 1000 photon pairs. They found the photons had opposite polarizations far more often than would be expected by chance, thus confirming spooky action over a record distance (though the 2015 test over a shorter distance was more stringent).
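
A standard way to score such polarization correlations is the CHSH test: any locally causal (non-quantum) model gives |S| of at most 2, while entangled photons can reach about 2.83. The Python sketch below is a toy simulation of that test for a polarization singlet state, using the textbook prediction E(a, b) = -cos 2(a - b); it is an illustration of the idea, not the Micius team's actual analysis.

import math, random

# Toy simulation of a CHSH Bell test for a polarization singlet state.
# Locally causal models satisfy |S| <= 2; quantum mechanics allows up to 2*sqrt(2) ~ 2.83.

def correlation(a_deg, b_deg, n=200_000):
    """Monte Carlo estimate of E(a, b) = -cos(2(a - b)) for polarizer angles a, b in degrees."""
    e_target = -math.cos(math.radians(2.0 * (a_deg - b_deg)))
    p_same = (1.0 + e_target) / 2.0   # probability the two detectors give the same sign
    total = 0
    for _ in range(n):
        alice = random.choice((-1, 1))
        bob = alice if random.random() < p_same else -alice
        total += alice * bob
    return total / n

a1, a2, b1, b2 = 0.0, 45.0, 22.5, 67.5   # standard CHSH analyzer settings
s = correlation(a1, b1) - correlation(a1, b2) + correlation(a2, b1) + correlation(a2, b2)
print(f"|S| ~ {abs(s):.2f} (classical bound 2, quantum limit ~ 2.83)")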

The team had to overcome many hurdles, including keeping the beams of photons focused on the ground stations as the satellite hurtled through space at nearly 8 kilometers per second. "Showing and demonstrating it is quite a challenging task," says Alexander Ling, a physicist at the National University of Singapore. "It's very encouraging." However, Ling notes that Pan's team recovered only about one photon out of every 6 million sent from the satellite—far better than ground-based experiments but still far too few for practical quantum communication.
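
That recovery rate is easier to picture as a link loss; the conversion below is a back-of-envelope figure, not a number from the paper:

\[
10 \log_{10}\!\left(6 \times 10^{6}\right) \approx 68\ \mathrm{dB},
\]

compared with the several hundred decibels that 1200 kilometers of telecom fiber, at roughly 0.2 dB per kilometer, would impose on the same photons.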

Pan expects China's National Space Science Center to launch additional satellites with stronger and cleaner beams that could be detected even when the sun is shining. (Micius operates only at night.) "In the next 5 years we plan to launch some really practical quantum satellites," he says. In the meantime, he plans to use Micius to distribute quantum keys to Chinese ground stations, which will require longer strings of photons and additional steps. Then he wants to demonstrate intercontinental quantum key distribution between stations in China and Austria, which will require holding one half of an entangled photon pair on board until the Austrian ground station appears within view of the satellite. He also plans to teleport a quantum state—a technique for transferring quantum-encoded information without moving an actual object—from a third Tibetan observatory to the satellite.

Other countries are inching toward quantum space experiments of their own. Ling is teaming up with physicists in Australia to send quantum information between two satellites, and the Canadian Space Agency recently announced funding for a small quantum satellite. European and U.S. teams are also proposing putting quantum instruments on the International Space Station. One goal is to test whether entanglement is affected by a changing gravitational field, by comparing a photon that stays in the weaker gravitational environment of orbit with an entangled partner sent to Earth, says Anton Zeilinger, a physicist at the Austrian Academy of Sciences in Vienna. "There are not many experiments which test links between gravity and quantum physics."

The implications go beyond record-setting demonstrations: A network of satellites could someday connect the quantum computers being designed in labs worldwide. Pan's paper "shows that China is making the right decisions," says Zeilinger, who has pushed the European Space Agency to launch its own quantum satellite. "I'm personally convinced that the internet of the future will be based on these quantum principles."

DOI: 10.1126/science.aan6972
(FULL STORY)

Scientists make waves with black hole research
[6/14/2017]
Scientists at the University of Nottingham have made a significant leap forward in understanding the workings of one of the mysteries of the universe. They have successfully simulated the conditions around black holes using a specially designed water bath.

Their findings shed new light on the physics of black holes with the first laboratory evidence of the phenomenon known as superradiance, achieved using water and a wave generator.
The research - Rotational superradiant scattering in a vortex flow - has been published in Nature Physics. It was undertaken by a team in the Quantum Gravity Laboratory in the School of Physics and Astronomy.
The work was led by Silke Weinfurtner from the School of Mathematical Sciences. In collaboration with an interdisciplinary team she designed and built the black hole 'bath' and measurement system to simulate black hole conditions.
Dr Weinfurtner said: "This research has been particularly exciting to work on as it has bought together the expertise of physicists, engineers and technicians to achieve our common aim of simulating the conditions of a black hole and proving that superadiance exists. We believe our results will motivate further research on the observation of superradiance in astrophysics."
What is superradiance?
The Nottingham experiment was based on the theory that the region immediately outside the event horizon of a rotating black hole - a black hole's gravitational point of no return - is dragged around by the rotation. Any wave that enters this region but does not stray past the event horizon should be deflected and come out with more energy than it carried on the way in - an effect known as superradiance.
Superradiance - the extraction of energy from a rotating black hole - is the wave analogue of the Penrose mechanism and a precursor of Hawking radiation, a quantum version of black-hole superradiance.
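
A compact statement of the condition, standard in the black-hole literature though not spelled out in the article: a wave mode of frequency ω and azimuthal number m is amplified on reflection, drawing energy from the rotation, exactly when

\[
0 < \omega < m\,\Omega_H ,
\]

where Ω_H is the angular velocity of the horizon; in the water-tank analogue, the corresponding quantity is roughly the rotation rate of the draining flow near the effective horizon.
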
What's in the Black Hole Lab?
Dr Weinfurtner said: "Some of the bizzare black hole phenomena are hard, if not, impossible to study directly. This means there are very limited experimental possibilities. So this research is quite an achievement."
The 'flume' is a specially designed bath, 3m long, 1.5m wide and 50cm deep, with a hole in the centre. Water is pumped in a closed circuit to establish a rotating draining flow. Once the water reached the desired depth, waves were generated at varied frequencies until the superradiant scattering effect was created and recorded using a specially designed 3D air-fluid interface sensor.
Tiny dots of white paper punched out by a specially adapted sewing machine were used to measure the flow field - the speed of the fluid flow around the analogue black hole.
It all started from humble beginnings
This research has been many years in the making. The initial idea for creating a superradiant effect with water started with a bucket and a bidet. Dr Weinfurtner said: "This research has grown from humble beginnings. I had the initial idea for a water-based experiment when I was at the International School for Advanced Studies (SISSA) in Italy, and I set up an experiment with a bucket and a bidet. However, when it caused a flood I was quickly found a lab to work in!"
After her postdoc, Dr Weinfurtner went on to work with Bill Unruh, the Canadian-born physicist who has made seminal contributions to our understanding of gravity, black holes, cosmology, quantum fields in curved spaces, and the foundations of quantum mechanics, including the discovery of the Unruh effect.
Her move to the University of Nottingham accelerated her research as she was able to set up her own research group with support from the machine shop in the School of Physics and Astronomy.
Explore further: Water circling drain experiments offer insight into black holes
More information: Theo Torres et al, Rotational superradiant scattering in a vortex flow, Nature Physics (2017). DOI: 10.1038/nphys4151


Read more at: https://phys.org/news/2017-06-scientists-black-hole.html#jCp
(FULL STORY)

We Live in a Cosmic Void, Another Study Confirms
[6/14/2017]
Earth and its parent galaxy are living in a cosmic desert — a region of space largely devoid of other galaxies, stars and planets, according to a new study.

The findings confirm the results of a previous study based on observations taken in 2013. That previous study showed that Earth's galaxy, the Milky Way, is part of a so-called cosmic void. These voids are part of the large-scale structure of the universe, which looks sort of like a block of Swiss cheese, made up of dense filaments containing huge collections of galaxies surrounding relatively empty regions.

The KBC void

The cosmic void that contains the Milky Way is dubbed the Keenan, Barger and Cowie (KBC) void, after the three astronomers who identified it in the 2013 study. It is the largest cosmic void ever observed — about seven times larger than the average void, with a radius of about 1 billion light-years, according to the study.

The KBC void is shaped like a sphere, and is surrounded by a shell of galaxies, stars and other matter. The new study shows this model of the KBC void is not ruled out based on additional observational data, Amy Barger, an observational cosmologist at the University of Wisconsin-Madison who was involved with both studies, said in a statement from the university.

Barger's undergraduate student who led the study, Benjamin Hoscheit, spoke about their work at the American Astronomical Society meeting in Austin, Texas, on June 6.

Hoscheit sought an efficient way to verify the results of the 2013 study, but in a shorter time span. That work was led by Ryan Keenan, Barger's doctoral student at the time at the University of Hawaii.

Whereas Keenan's work measured the density of different areas of the universe using galaxy catalogs, Hoscheit verified the work using a measurement called the kinematic Sunyaev-Zel'dovich (kSZ) effect, which measures the motions of galaxy clusters within the cosmic web.

The kSZ effect looks at photons coming from the cosmic microwave background (CMB), or light left over from an early stage in the universe's evolution. As the distant CMB photons pass through galaxy clusters, the photons shift in energy. This shift in energy shows how the galaxy clusters are moving, Hoscheit said.
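
For reference (the formula is standard and not given in the article), the kinematic SZ shift produced by a cluster with electron optical depth τ_e moving with line-of-sight velocity v_r is approximately

\[
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} \;\approx\; -\,\tau_e\,\frac{v_r}{c},
\qquad
\tau_e = \sigma_T \int n_e \, dl ,
\]

so a cluster receding from the observer produces a slight temperature decrement, and the size of the shift tracks the cluster's line-of-sight speed.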

Galaxy clusters that exist in a cosmic void should be attracted to regions with stronger gravity. That would be revealed in how fast these galaxy clusters move through space, Hoscheit said. But if the clusters were moving more slowly than expected, then perhaps the conclusions of the previous study would need to be rethought, he said. However, the kSZ effect on the clusters was consistent with that in the 2013 study, Hoscheit added.

Follow us @Spacedotcom, Facebook and Google+. Original article on Space.com.
(FULL STORY)

Scientists Finally Witnessed a Phenomenon That Einstein Thought “Impossible”
[6/11/2017]
Astronomers have observed the gravitational microlensing of light by a star other than the sun for the first time. Predicted by Einstein as part of his theory of general relativity, the effect could help measure the masses of distant stars through gravitational deflection.
(FULL STORY)

Charmed Existence: Mysterious Particles Could Reveal Mysteries of the Big Bang
[6/9/2017]
A mysterious particle created in a blazing fireball at an atom smasher is misbehaving, a new experiment shows.

The particle, called a charm quark, revealed surprising interactions with its neighboring subatomic particles, measurements show. That discovery could improve scientists' understanding of the conditions that existed soon after the Big Bang, when the universe was permeated by a primordial soup of elementary particles, and possibly show hints of physics beyond what scientists know today. [Wacky Physics: The Coolest Little Particles in Nature]

Back to the beginning

The surprising charm-quark behavior was first spotted at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) in Upton, New York, which aims to recreate conditions in the trillionths of a second after the Big Bang. The key to the new observation is the Heavy Flavor Tracker (HFT), a set of recently installed ultrasensitive photodetectors similar to those in digital cameras. Using the HFT, for the first time, researchers directly measured the behavior of charm quarks as they emerged from the trillion-degree fireball meant to recreate the universe's first moments.

To recreate these primeval conditions, the RHIC fires gold atoms at one another at nearly the speed of light. As they collide, the atoms break up into a soup of elementary, free-flowing particles known as a quark-gluon plasma. Quarks make up more familiar particles, like protons and neutrons, while gluons are the carriers of the strong nuclear force that holds the quarks together.

The measurements tell the physicists whether their models of fields that bind together quarks and gluons, based on a theory called quantum chromodynamics, are correct, according to a new study detailing the findings.

"You can study how nuclear medium behaves and functions at these high temperatures," Brookhaven National Laboratory physicist Flemming Videbaek, a coauthor of the study, told Live Science.

Heavy interactions

Quarks and their antimatter counterparts come in six varieties, known to physicists as "flavors": up, down, top, bottom, strange and charm. They have different masses; the up and down quarks that make up protons and neutrons are the lightest. Charm quarks are among the heaviest, lighter only than bottom and top quarks. They never form in ordinary conditions on Earth; a particle accelerator is necessary to make them. [7 Strange Facts About Quarks]

Albert Einstein's famous E = mc^2 equation says energy and mass are the same thing, and when the atomic nuclei collide in the RHIC, the energy is so great that it creates heavier, exotic particles, such as charm quarks.

One of the particles formed by this fiery collision is the D-zero, made up of a charm quark and an anti-up quark. The D-zeros travel for a fraction of a millimeter before they decay and become two other particles: kaons and pions. It's the kaons and pions that the experimenters actually "see" with the HFT.

What surprised the researchers was that the flow of quark-gluon plasma caught the heavy D-zero particles. The football-shaped fireball emitted more D-zeros from the wider part than from the ends, rather than in an evenly distributed way. Previous models predicted that the D-zero, which contains the heavy charm quark, was too massive to interact with the quarks and gluons in the plasma. According to those models, its mass would mean the D-zero barreled out too quickly, before the plasma's forces could act on it, and the plasma would not last long enough to produce much interaction.

Instead, the quark-gluon plasma has a low viscosity; if it were a fluid, it would flow freely, Videbaek said.

"The fact that it has a low viscosity means that it interacts [with the particles] quite a bit," Videbaek said. That means "some of the models were quite far off."

In addition to helping scientists refine their models, the charm quarks revealed more details about how the quark-gluon plasma behaves. Knowing more about what such plasmas actually do helps scientists understand what to look for if they seek out new physical laws, and helps them understand the implications of the ones they know already.

In future experiments, the team hopes to gain insight into the behavior of other heavy and rare particles made up of quarks, such as the B (or "beauty") meson, which is made of a bottom quark and one of its lighter cousins, Videbaek said.

The study was published May 26 in the journal Physical Review Letters.

Originally published on Live Science.
(FULL STORY)

A New State of Matter is Discovered – And It’s Strange
[6/8/2017]
A researcher has proven that a phase transition to a new state of matter is possible in our 3D universe at low temperatures in "disordered" materials like glass. This discovery will shape future research on these materials.
(FULL STORY)

A Theory of Reality as More Than the Sum of Its Parts
[6/1/2017]
New math shows how, contrary to conventional scientific wisdom, conscious beings and other macroscopic entities might have greater influence over the future than does the sum of their microscopic components.
(FULL STORY)

Dark Energy May Lurk in the Nothingness of Space
[5/26/2017]
A new study may help reveal the nature of dark energy, the mysterious substance that is pushing the universe to expand outward. Dark energy may emerge from fluctuations in the nothingness of empty space, a new hypothesis suggests.

That idea, in turn, could also explain why the cosmological constant, a mathematical constant that Albert Einstein conjured up yet famously called "the biggest blunder of his life," takes the value it does. [8 Ways You Can See Einstein's Theory of Relativity in Real Life]

The new study proposed that the expansion is driven by fluctuations in the energy carried by the vacuum, or regions of space devoid of matter. The fluctuations create pressure that forces space itself to expand, making matter and energy less dense as the universe ages, said study co-author Qingdi Wang, a doctoral student at the University of British Columbia (UBC) in Canada.

Accelerating universe

Scientists call the force that pushes the universe to expand a cosmological constant (though it isn't a "force" in the strict sense). This constant is the energy density of space itself. If it is greater than zero, then Einstein's equations of relativity, which describe the structure of space-time, imply an expanding universe. In the late 1990s, measurements of distant supernovas showed that the universe's expansion was accelerating, not just continuing. Cosmologists call the energy that drives that acceleration dark energy. Whatever dark energy is, it dilutes more slowly than matter or dark matter as the universe expands, and it doesn't clump together the way either of them does under the influence of gravity.

This acceleration has been a big quandary for physicists, because it contradicts the predictions of quantum field theories, the theoretical frameworks that describe the interactions of the tiniest subatomic particles. Quantum field theories predict vacuum energies that are so large that the universe shouldn't exist at all, said Lucas Lombriser, postdoctoral fellow at the Royal Observatory, Edinburgh, in Scotland, who was not involved in the new study. This discrepancy is called the "old" cosmological constant problem, and physicists generally thought that once new physics was discovered, the cosmological constant would disappear; expansion would be explained in some other way.

However, when scientists discovered the accelerated expansion, a new problem arose. According to theoretical calculations, the cosmological constant should be 50 to 120 orders of magnitude larger than it is, with a correspondingly large rate of expansion, Lombriser said.

Essentially, the energy density of the universe (how much energy there is per unit volume) should be gigantic, and it clearly isn't.
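
The mismatch is often summarized with round numbers (standard textbook values, not taken from the new paper):

\[
\rho_\Lambda^{\mathrm{obs}} \sim 10^{-47}\ \mathrm{GeV}^4
\quad\text{versus}\quad
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim M_{\mathrm{Pl}}^4 \sim 10^{76}\ \mathrm{GeV}^4 ,
\]

a discrepancy of roughly 120 orders of magnitude when the quantum field theory estimate is cut off at the Planck scale; milder cutoffs give the 50-orders end of the range quoted above.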

Fluctuations in empty space

The new work addresses not only what dark energy is but why the rate of universal expansion has the value it does.

"Everybody wants to know what dark energy is," Wang told Live Science. "I reconsidered this question more carefully," from the perspective of the universe's energy density.

Wang and his colleagues assumed that modern quantum field theory was correct about the energy density being very large, but that the vacuum fluctuations, or the movements of empty space, were very large on tiny scales, near what is called the Planck length, or 1.62 × 10^-35 meters. That's so small that a proton is 100 million trillion times bigger.
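
That comparison checks out with round numbers; taking a proton diameter of about 1.7 × 10^-15 meters (a figure assumed here, not given in the article),

\[
\frac{1.7\times10^{-15}\ \mathrm{m}}{1.62\times10^{-35}\ \mathrm{m}} \;\approx\; 10^{20},
\]

or roughly 100 million trillion.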

"Every point in space is going through expansion and contraction," he said. "But it looks smooth just like a table looks smooth from far away."

The vacuum fluctuations, in Wang's formulation, are like children on a swing pumping their legs. Even though nobody is pushing them, they manage to impart extra energy on the swing, making the swing go up higher than it would otherwise. This phenomenon is called parametric resonance, which basically means that some piece of the system — the expansion and contraction, or the swinging of the child's legs — changes with time. In this case, the density of a very tiny portion of the universe is changing, Wang said.
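
The swing analogy corresponds to a textbook parametric oscillator (a toy model added here for orientation, not the authors' full calculation): a mode of natural frequency ω_0 whose parameters are modulated at frequency γ,

\[
\ddot{x} + \omega_0^{2}\left[1 + \epsilon\cos(\gamma t)\right]x = 0 ,
\]

grows fastest when γ ≈ 2ω_0, just as a child pumping at twice the swing's natural frequency builds up amplitude with no outside push.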

Since the fluctuations are little bits of the universe expanding and contracting, this tiny resonance adds up on cosmological scales, he said. So the universe expands. (Expansion and contraction of space doesn't violate conservation laws, because space itself is doing the expanding).

As a result of Wang's approach, there's no need for any new fields, as in some dark energy models. Instead the expansion of the universe is roughly the same as that already predicted by quantum field theory.

Observations needed

While Wang's idea is a good one, that doesn't mean it's the end of the story, Lombriser said. The question is whether observations of the universe bear the theory out, he said.

"So far, they can argue that the vacuum contribution is in the right ballpark for what is being observed (which, if it holds up, is already a huge success)," Lombriser said in an email. "They have not yet made an accurate prediction for the exact observed value, but this is something they intend to further investigate in their future work."

Other physicists are more skeptical.

"On these high-energy scales, classical general relativity doesn't work any longer, but that's what they use. So, their approximation is interesting, but it's not well-justified, because in this limit, one should be using quantum gravity (a theory which we don't have)," Sabine Hossenfelder, a research fellow at the Frankfurt Institute for Advanced Studies in Germany, told Live Science via email.

"This paper is simply a first step in the process," said study co-author William Unruh, a physicist at UBC. "But I think the path is worth pursuing, as our results are suggestive."

The study is published in the May 15 issue of the journal Physical Review D.
(FULL STORY)

What Happens When You Mix Thermodynamics and the Quantum World? A Revolution
[5/6/2017]
In his 1824 book, Reflections on the Motive Power of Fire, the 28-year-old French engineer Sadi Carnot worked out a formula for how efficiently steam engines can convert heat—now known to be a random, diffuse kind of energy—into work, an orderly kind of energy that might push a piston or turn a wheel. To Carnot’s surprise, he discovered that a perfect engine’s efficiency depends only on the difference in temperature between the engine’s heat source (typically a fire) and its heat sink (typically the outside air). Work is a byproduct, Carnot realized, of heat naturally passing to a colder body from a warmer one.
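
Carnot's result, in modern notation (the article describes it but never writes it out), is the ideal efficiency

\[
\eta_{\mathrm{Carnot}} \;=\; 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}},
\]

with both temperatures on an absolute scale, so a bigger gap between the fire and the surrounding air means more of the heat can be turned into work.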


Carnot died of cholera eight years later, before he could see his efficiency formula develop over the 19th century into the theory of thermodynamics: a set of universal laws dictating the interplay among temperature, heat, work, energy and entropy—a measure of energy’s incessant spreading from more- to less-energetic bodies. The laws of thermodynamics apply not only to steam engines but also to everything else: the sun, black holes, living beings and the entire universe. The theory is so simple and general that Albert Einstein deemed it likely to “never be overthrown.”

Yet since the beginning, thermodynamics has held a singularly strange status among the theories of nature.

“If physical theories were people, thermodynamics would be the village witch,” the physicist Lídia del Rio and co-authors wrote last year in Journal of Physics A. “The other theories find her somewhat odd, somehow different in nature from the rest, yet everyone comes to her for advice, and no one dares to contradict her.”

Unlike, say, the Standard Model of particle physics, which tries to get at what exists, the laws of thermodynamics only say what can and can’t be done. But one of the strangest things about the theory is that these rules seem subjective. A gas made of particles that in aggregate all appear to be the same temperature—and therefore unable to do work—might, upon closer inspection, have microscopic temperature differences that could be exploited after all. As the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.”

In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory—“a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology—a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year—is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply.

They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information—the abstract 1s and 0s by which physical states are distinguished and knowledge is measured. “Quantum thermodynamics” is a field in the making, marked by a typical mix of exuberance and confusion.

“We are entering a brave new world of thermodynamics,” said Sandu Popescu, a physicist at the University of Bristol who is one of the leaders of the research effort. “Although it was very good as it started,” he said, referring to classical thermodynamics, “by now we are looking at it in a completely new way.”
Entropy as Uncertainty
In an 1867 letter to his fellow Scotsman Peter Tait, Maxwell described his now-famous paradox hinting at the connection between thermodynamics and information. The paradox concerned the second law of thermodynamics—the rule that entropy always increases—which Sir Arthur Eddington would later say “holds the supreme position among the laws of nature.” According to the second law, energy becomes ever more disordered and less useful as it spreads to colder bodies from hotter ones and differences in temperature diminish. (Recall Carnot’s discovery that you need a hot body and a cold body to do work.) Fires die out, cups of coffee cool and the universe rushes toward a state of uniform temperature known as “heat death,” after which no more work can be done.

The great Austrian physicist Ludwig Boltzmann showed that energy disperses, and entropy increases, as a simple matter of statistics: There are many more ways for energy to be spread among the particles in a system than concentrated in a few, so as particles move around and interact, they naturally tend toward states in which their energy is increasingly shared.
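
Boltzmann's statistical reading of entropy is captured by the formula on his tombstone (stated here for reference; the article only paraphrases it):

\[
S = k_B \ln W ,
\]

where W counts the microscopic arrangements consistent with what we see macroscopically; spread-out energy simply corresponds to vastly more arrangements, hence higher entropy.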

But Maxwell’s letter described a thought experiment in which an enlightened being—later called Maxwell’s demon—uses its knowledge to lower entropy and violate the second law. The demon knows the positions and velocities of every molecule in a container of gas. By partitioning the container and opening and closing a small door between the two chambers, the demon lets only fast-moving molecules enter one side, while allowing only slow molecules to go the other way. The demon’s actions divide the gas into hot and cold, concentrating its energy and lowering its overall entropy. The once useless gas can now be put to work.


Maxwell and others wondered how a law of nature could depend on one’s knowledge—or ignorance—of the positions and velocities of molecules. If the second law of thermodynamics depends subjectively on one’s information, in what sense is it true?

A century later, the American physicist Charles Bennett, building on work by Leo Szilard and Rolf Landauer, resolved the paradox by formally linking thermodynamics to the young science of information. Bennett argued that the demon’s knowledge is stored in its memory, and memory has to be cleaned, which takes work. (In 1961, Landauer calculated that at room temperature, it takes at least 2.9 zeptojoules of energy for a computer to erase one bit of stored information.) In other words, as the demon organizes the gas into hot and cold and lowers the gas’s entropy, its brain burns energy and generates more than enough entropy to compensate. The overall entropy of the gas-demon system increases, satisfying the second law of thermodynamics.
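
Landauer's figure is easy to reproduce (a quick check, not an added result): erasing one bit at temperature T costs at least k_B T ln 2, so at room temperature

\[
E_{\min} = k_B T \ln 2 \approx \left(1.38\times10^{-23}\ \mathrm{J/K}\right)\left(300\ \mathrm{K}\right)\left(0.693\right) \approx 2.9\times10^{-21}\ \mathrm{J} ,
\]

which is the 2.9 zeptojoules quoted above.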

The findings revealed that, as Landauer put it, “Information is physical.” The more information you have, the more work you can extract. Maxwell’s demon can wring work out of a single-temperature gas because it has far more information than the average user.

But it took another half century and the rise of quantum information theory, a field born in pursuit of the quantum computer, for physicists to fully explore the startling implications.

Over the past decade, Popescu and his Bristol colleagues, along with other groups, have argued that energy spreads to cold objects from hot ones because of the way information spreads between particles. According to quantum theory, the physical properties of particles are probabilistic; instead of being representable as 1 or 0, they can have some probability of being 1 and some probability of being 0 at the same time. When particles interact, they can also become entangled, joining together the probability distributions that describe both of their states. A central pillar of quantum theory is that the information—the probabilistic 1s and 0s representing particles’ states—is never lost. (The present state of the universe preserves all information about the past.)
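
A minimal example of such an entangled state (textbook notation, added for concreteness) is the two-particle superposition

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_A|1\rangle_B + |1\rangle_A|0\rangle_B\right),
\]

in which neither particle has a definite 0 or 1 on its own, but measuring one instantly fixes what the other will show; no information about the pair is lost, it is merely shared between them.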

Over time, however, as particles interact and become increasingly entangled, information about their individual states spreads and becomes shuffled and shared among more and more particles. Popescu and his colleagues believe that the arrow of increasing quantum entanglement underlies the expected rise in entropy—the thermodynamic arrow of time. A cup of coffee cools to room temperature, they explain, because as coffee molecules collide with air molecules, the information that encodes their energy leaks out and is shared by the surrounding air.

Understanding entropy as a subjective measure allows the universe as a whole to evolve without ever losing information. Even as parts of the universe, such as coffee, engines and people, experience rising entropy as their quantum information dilutes, the global entropy of the universe stays forever zero.

Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”

Moreover, the idea that energy has two forms, useless heat and useful work, “made sense for steam engines,” Renner said. “In the new way, there is a whole spectrum in between—energy about which we have partial information.”

Entropy and thermodynamics are “much less of a mystery in this new view,” he said. “That’s why people like the new view better than the old one.”
Thermodynamics From Symmetry
The relationship among information, energy and other “conserved quantities,” which can change hands but never be destroyed, took a new turn in two papers published simultaneously last July in Nature Communications, one by the Bristol team and another by a team that included Jonathan Oppenheim at University College London. Both groups conceived of a hypothetical quantum system that uses information as a sort of currency for trading between the other, more material resources.

Imagine a vast container, or reservoir, of particles that possess both energy and angular momentum (they’re both moving around and spinning). This reservoir is connected to both a weight, which takes energy to lift, and a turning turntable, which takes angular momentum to speed up or slow down. Normally, a single reservoir can’t do any work—this goes back to Carnot’s discovery about the need for hot and cold reservoirs. But the researchers found that a reservoir containing multiple conserved quantities follows different rules. “If you have two different physical quantities that are conserved, like energy and angular momentum,” Popescu said, “as long as you have a bath that contains both of them, then you can trade one for another.”

In the hypothetical weight-reservoir-turntable system, the weight can be lifted as the turntable slows down, or, conversely, lowering the weight causes the turntable to spin faster. The researchers found that the quantum information describing the particles’ energy and spin states can act as a kind of currency that enables trading between the reservoir’s energy and angular momentum supplies. The notion that conserved quantities can be traded for one another in quantum systems is brand new. It may suggest the need for a more complete thermodynamic theory that would describe not only the flow of energy, but also the interplay between all the conserved quantities in the universe.


The fact that energy has dominated the thermodynamics story up to now might be circumstantial rather than profound, Oppenheim said. Carnot and his successors might have developed a thermodynamic theory governing the flow of, say, angular momentum to go with their engine theory, if only there had been a need. “We have energy sources all around us that we want to extract and use,” Oppenheim said. “It happens to be the case that we don’t have big angular momentum heat baths around us. We don’t come across huge gyroscopes.”

Popescu, who won a Dirac Medal last year for his insights in quantum information theory and quantum foundations, said he and his collaborators work by “pushing quantum mechanics into a corner,” gathering at a blackboard and reasoning their way to a new insight, after which it’s easy to derive the associated equations. Some realizations are in the process of crystallizing. In one of several phone conversations in March, Popescu discussed a new thought experiment that illustrates a distinction between information and other conserved quantities—and indicates how symmetries in nature might set them apart.

“Suppose that you and I are living on different planets in remote galaxies,” he said, and suppose that he, Popescu, wants to communicate where you should look to find his planet. The only problem is, this is physically impossible: “I can send you the story of Hamlet. But I cannot indicate for you a direction.”

There’s no way to express in a string of pure, directionless 1s and 0s which way to look to find each other’s galaxies because “nature doesn’t provide us with [a reference frame] that is universal,” Popescu said. If it did—if, for instance, tiny arrows were sewn everywhere in the fabric of the universe, indicating its direction of motion—this would violate “rotational invariance,” a symmetry of the universe. Turntables would start turning faster when aligned with the universe’s motion, and angular momentum would not appear to be conserved. The early-20th-century mathematician Emmy Noether showed that every symmetry comes with a conservation law: The rotational symmetry of the universe reflects the preservation of a quantity we call angular momentum. Popescu’s thought experiment suggests that the impossibility of expressing spatial direction with information “may be related to the conservation law,” he said.
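
Noether's statement can be put compactly (a standard textbook form, not from the article): if a system's Lagrangian 𝓛 is unchanged by rotating everything through an angle φ about a given axis, then the momentum conjugate to that angle,

\[
L_z = \frac{\partial \mathcal{L}}{\partial \dot{\varphi}},
\qquad
\frac{dL_z}{dt} = 0 ,
\]

is conserved; that conjugate quantity is precisely the angular momentum whose bookkeeping Popescu's thought experiment leans on.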

The seeming inability to express everything about the universe in terms of information could be relevant to the search for a more fundamental description of nature. In recent years, many theorists have come to believe that space-time, the bendy fabric of the universe, and the matter and energy within it might be a hologram that arises from a network of entangled quantum information. “One has to be careful,” Oppenheim said, “because information does behave differently than other physical properties, like space-time.”

Knowing the logical links between the concepts could also help physicists reason their way inside black holes, mysterious space-time swallowing objects that are known to have temperatures and entropies, and which somehow radiate information. “One of the most important aspects of the black hole is its thermodynamics,” Popescu said. “But the type of thermodynamics that they discuss in the black holes, because it’s such a complicated subject, is still more of a traditional type. We are developing a completely novel view on thermodynamics.” It’s “inevitable,” he said, “that these new tools that we are developing will then come back and be used in the black hole.”

Janet Anders (lower right) at a 160-person conference on quantum thermodynamics held at the University of Oxford in March.
Credit: Luis Correa
What to Tell Technologists
Janet Anders, a quantum information scientist at the University of Exeter, takes a technology-driven approach to understanding quantum thermodynamics. “If we go further and further down [in scale], we’re going to hit a region that we don’t have a good theory for,” Anders said. “And the question is, what do we need to know about this region to tell technologists?”

In 2012, Anders conceived of and co-founded a European research network devoted to quantum thermodynamics that now has 300 members. With her colleagues in the network, she hopes to discover the rules governing the quantum transitions of quantum engines and fridges, which could someday drive or cool computers or be used in solar panels, bioengineering and other applications. Already, researchers are getting a better sense of what quantum engines might be capable of. In 2015, Raam Uzdin and colleagues at the Hebrew University of Jerusalem calculated that quantum engines can outpower classical engines. These probabilistic engines still follow Carnot’s efficiency formula in terms of how much work they can derive from energy passing between hot and cold bodies. But they’re sometimes able to extract the work much more quickly, giving them more power. An engine made of a single ion was experimentally demonstrated and reported in Science in April 2016, though it didn’t harness the power-enhancing quantum effect.

Popescu, Oppenheim, Renner and their cohorts are also pursuing more concrete discoveries. In March, Oppenheim and his former student, Lluis Masanes, published a paper deriving the third law of thermodynamics—a historically confusing statement about the impossibility of reaching absolute-zero temperature—using quantum information theory. They showed that the “cooling speed limit” preventing you from reaching absolute zero arises from the limit on how fast information can be pumped out of the particles in a finite-size object. The speed limit might be relevant to the cooling abilities of quantum fridges, like the one reported in a preprint in February. In 2015, Oppenheim and other collaborators showed that the second law of thermodynamics is replaced, on quantum scales, by a panoply of second “laws”—constraints on how the probability distributions defining the physical states of particles evolve, including in quantum engines.


As the field of quantum thermodynamics grows quickly, spawning a range of approaches and findings, some traditional thermodynamicists see a mess. Peter Hänggi, a vocal critic at the University of Augsburg in Germany, thinks the importance of information is being oversold by ex-practitioners of quantum computing, who he says mistake the universe for a giant quantum information processor instead of a physical thing. He accuses quantum information theorists of confusing different kinds of entropy—the thermodynamic and information-theoretic kinds—and using the latter in domains where it doesn’t apply. Maxwell’s demon “gets on my nerves,” Hänggi said. When asked about Oppenheim and company’s second “laws” of thermodynamics, he said, “You see why my blood pressure rises.”

While Hänggi is seen as too old-fashioned in his critique (quantum-information theorists do study the connections between thermodynamic and information-theoretic entropy), other thermodynamicists said he makes some valid points. For instance, when quantum information theorists conjure up abstract quantum machines and see if they can get work out of them, they sometimes sidestep the question of how, exactly, you extract work from a quantum system, given that measuring it destroys its simultaneous quantum probabilities. Anders and her collaborators have recently begun addressing this issue with new ideas about quantum work extraction and storage. But the theoretical literature is all over the place.

“Many exciting things have been thrown on the table, a bit in disorder; we need to put them in order,” said Valerio Scarani, a quantum information theorist and thermodynamicist at the National University of Singapore who was part of the team that reported the quantum fridge. “We need a bit of synthesis. We need to understand your idea fits there; mine fits here. We have eight definitions of work; maybe we should try to figure out which one is correct in which situation, not just come up with a ninth definition of work.”

Oppenheim and Popescu fully agree with Hänggi that there’s a risk of downplaying the universe’s physicality. “I’m wary of information theorists who believe everything is information,” Oppenheim said. “When the steam engine was being developed and thermodynamics was in full swing, there were people positing that the universe was just a big steam engine.” In reality, he said, “it’s much messier than that.” What he likes about quantum thermodynamics is that “you have these two fundamental quantities—energy and quantum information—and these two things meet together. That to me is what makes it such a beautiful theory.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
(FULL STORY)

Alien Civilizations May Number In The Trillions, New Study Says
[5/19/2017]
The possibility that we earthlings are not truly alone in the universe has gained some added credibility, thanks to a new study that coincides with NASA’s recent planetary discoveries. The research, published in the journal Astrobiology last week, suggests that more planets in the Milky Way galaxy may harbor advanced civilizations than we previously imagined.

Study co-authors Adam Frank and Woodruff Sullivan looked at recent discoveries of potentially habitable exoplanets and considered the odds of whether sophisticated civilizations existed on them in the past or present.

“What we showed was the ‘floor’ on the probability for a civilization to form on any randomly chosen planet,” Frank, a University of Rochester physics and astronomy professor, told The Huffington Post in an email. “If we are the only civilization in cosmic history, then what we calculated is the actual probability nature has set. But if the actual probability is higher than that floor, then civilizations have happened before.”

Frank says the potential number of planets orbiting their parent stars within a habitable distance is staggering.

“Even if you are pretty pessimistic and think that you’d have to search through 100 billion (habitable zone) planets before you found one where a civilization developed, then there have still been a trillion civilizations over cosmic history!” Frank wrote. “When I think about that, my mind reels — even if there is just a one in a 100 billion chance of evolution creating exo-civilizations, the universe still has made so many of them that we are swamped by histories other than our own.”


An artist’s depiction of planetary discoveries by NASA’s Kepler spacecraft, which searches for Earth-like planets. The Kepler telescope has discovered thousands of verified planets since it launched in 2009.
Credit: NASA/W. Stenzel
In 1961, astronomer Frank Drake — founder of the SETI Institute (SETI stands for “Search for Extraterrestrial Intelligence”) — devised what is now known as the “Drake equation” to estimate the number of planets that may be home to civilizations with the ability to communicate beyond their world.

Frank and Sullivan created a new equation, which appears at the bottom of the illustration below. While the Drake equation calculates the number of advanced alien civilizations that could exist in the Milky Way galaxy, Frank and Sullivan’s equation expands the question to calculate the number of advanced civilizations that have existed in our galaxy throughout the whole history of the universe.


Two equations consider the possibilities of technological alien civilizations in the Milky Way galaxy: at top, the 1961 Drake equation and, at bottom, a more recent equation by Adam Frank and Woodruff Sullivan.
Credit: University of Rochester
The variable factors that Drake and others consider when attempting to come up with figures about ET-inhabited worlds include the following (the standard form of the equation is written out just after this list):

The rate of formation of stars with planets suitable for intelligent life.
The number of those stars that have planetary systems.
The number of those planets which may have life-sustaining environments.
The number of those planets where life develops.
How many of those planets produce intelligent life.
How many of those intelligent life forms could produce technology, such as radio signals.
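
Written out, the 1961 Drake equation multiplies those factors together (this is the standard form, with the usual symbols; it is not reproduced from the article's graphic):

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L ,
\]

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per system, f_l, f_i and f_c the fractions on which life, intelligence and communicating technology arise, and L the lifetime of a signaling civilization. Frank and Sullivan's version, as described in their paper, collapses this to roughly A = N_ast · f_bt, the number of habitable-zone planets that have ever existed times the probability that such a planet ever produces a technological species, which removes the lifetime term and asks about all of cosmic history instead.
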
In their Astrobiology paper, Frank and Sullivan write:

“Recent advances in exoplanet studies provide strong constraints on all astrophysical terms in the Drake equation. We set a firm lower bound on the probability that one or more technological species have evolved anywhere and at any time in the history of the observable universe.”

The two scientists address what they refer to as “the cosmic frequency of technological species.”

“The universe is more than 13 billion years old,” Sullivan, of the astronomy department and astrobiology program at the University of Washington, said in a statement. “That means that even if there have been 1,000 civilizations in our own galaxy, if they live only as long as we have been around — roughly 10,000 years — then all of them are likely already extinct. And others won’t evolve until we are long gone.

“For us to have much chance in finding another ‘contemporary’ active technological civilization, on average they must last much longer than our present lifetime,” Sullivan said.

The search for extraterrestrial signals has been ongoing for decades.

“With so many stars and planets filling the cosmos, it boggles the mind to think that we’re the only clever life to have made an appearance,” SETI Institute senior astronomer Seth Shostak told HuffPost in an email. “Frank and Sullivan use new research indicating that roughly one in five stars is orbited by a planet that could nurture biology. After that, it’s just a matter of counting up the tally of stars in the visible universe, and saying that — with all the suitable real estate that’s out there, if we’re the only place with intelligent life, then we’ve really won the mother of all lotteries.”

Shostak cautions against being overly optimistic or pessimistic about the SETI Institute’s searches for intelligent signals from possible outer space neighbors.

“The odds that no one is out there are very, very small. It’s a bit like an ant coming out of its hive, seeing the enormous amount of real estate stretching in all directions and deciding that, if its home is the only ant hill, then its existence is a near-miracle. Or, put another way, the calculation by Frank and Sullivan quantifies Jodie Foster’s statement in [the movie] ‘Contact’ that, if there’s nobody out there, it would be a ‘waste of space,’” said Shostak.

Scientists searching for extraterrestrial beings — and, yes, to those beings, we would be aliens — are like archaeologists combing a vast space for treasures and information to learn more about the history of our species.

“I love the notion of a cosmic archaeological question. I think this puts an important new spin on the question about the rise of technological communicating intelligence,” Penelope J. Boston, incoming director of NASA’s Astrobiology Institute at Ames Research Center, told HuffPost.

“We have only been looking for other intelligences for a few decades in a galaxy of unfathomable proportions,” Boston said. “Of course we haven’t found anybody yet. I think it is childish to imagine that we should somehow have started looking, and bingo, there they are! I have trouble finding my dropped contact lens in the grass. Should I then disbelieve in the reality of my contact lens?”

While scientists had long wondered if there were other planets orbiting stars in the Milky Way galaxy and elsewhere, it wasn’t until the early 1990s that the first extrasolar world was confirmed.

“The existence of planets orbiting stars other than the sun is a 2,500-year-old question that has been entirely answered over the last 20 years,” said Frank. “We now know that every star in the night sky has at least one planet orbiting it, and many of those are in the right place for life to form.

“Ten thousand years from now, no one will remember anything about our era except it was when we discovered this single profound fact: We live in a cosmos of planets.”
(FULL STORY)

New blackbody force depends on spacetime geometry and topology
[5/23/2017]
In 2013, a group of physicists from Austria proposed the existence of a new and unusual force called the "blackbody force." Blackbodies—objects that absorb all incoming light and therefore appear black at room temperature—have long been known to emit blackbody radiation, which repels small nearby objects such as atoms and molecules. But the physicists showed that blackbodies theoretically also exert an attractive force on these objects. They called this force the "blackbody force," and showed that it can be stronger than blackbody radiation, and—for very small particles—even stronger than gravity.

Now in a new study published in EPL, a different team of physicists, C.R. Muniz et al., at Ceará State University and the Federal University of Ceará, Brazil, have theoretically demonstrated that the blackbody force depends not only on the geometry of the bodies themselves, but also on both the surrounding spacetime geometry and topology. In some cases, accounting for these latter factors significantly increases the strength of the blackbody force. The results have implications for a variety of astrophysics scenarios, such as planet and star formation, and possibly lab-based experiments.
"This work puts the blackbody force discovered in 2013 in a wider context, which involves strong gravitational sources and exotic objects like cosmic strings as well as the more prosaic ones found in condensed matter," Muniz told Phys.org.
As the scientists showed in 2013, the blackbody force arises when the heat absorbed by a blackbody causes the blackbody to emit electromagnetic waves that shift the atomic energy levels of nearby atoms and molecules. These shifts cause the atoms and molecules to be attracted to the blackbodies due to their high radiation intensity, pulling them together.
In the new study, the physicists investigated spherical blackbodies and cylindrical blackbodies, and showed how the topology and the local curvature of the spacetime influences their blackbody forces. They showed that ultradense spherical blackbodies like a neutron star (around which spacetime is highly curved) generate a stronger blackbody force due to the curvature compared to blackbodies in flat spacetime. They explain that this is because gravity modifies both the temperature of the blackbody and the solid angle at which the nearby atoms and molecules "see" the blackbody. On the other hand, a less dense blackbody such as our Sun (where spacetime is less curved) generates a blackbody force that is very similar to that of the flat case.
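For a rough sense of why compactness matters, one can compare the general-relativistic correction factor 1/sqrt(1 - 2GM/Rc^2) at the surface of a neutron star and of the Sun. The sketch below is a back-of-envelope illustration using typical textbook masses and radii (assumed values, not numbers from the Muniz et al. paper, whose actual calculation also involves the modified solid angle):

# Back-of-envelope sketch: gravitational factor 1/sqrt(1 - 2GM/(R c^2)) at a blackbody's surface.
# Illustrative only; masses and radii are typical textbook values, not values from the paper.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def correction_factor(mass_kg, radius_m):
    """Return 1/sqrt(1 - r_s/R), where r_s = 2GM/c^2 is the Schwarzschild radius."""
    r_s = 2.0 * G * mass_kg / c**2
    return 1.0 / math.sqrt(1.0 - r_s / radius_m)

print("Neutron star (1.4 M_sun, 12 km radius):", round(correction_factor(1.4 * M_sun, 12e3), 3))
print("Sun (1.0 M_sun, 696,000 km radius):    ", round(correction_factor(1.0 * M_sun, 6.96e8), 8))

The factor is of order 1.2 for the neutron star but differs from 1 only in the sixth decimal place for the Sun, consistent with the finding that less dense sources give a blackbody force very close to the flat-spacetime result.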
The researchers then considered the case of a global monopole, a spherical object that modifies the global properties of space, and found a different kind of influence. Whereas for other spherical blackbodies, the spacetime influence is gravitational and decreases with the distance to the blackbody, for the global monopole the influence is of a topological nature, decreasing with the distance but eventually reaching a constant value.
Finally, when investigating the blackbody force of cylindrical blackbodies around which spacetime is locally flat, the scientists found no gravitational correction to the temperature, but, surprisingly, an effect on the angles with nearby objects. And when a cylindrical blackbody becomes infinitely thin, turning into a hypothetical cosmic string, the blackbody force vanishes completely. Overall, the scientists expect that these newly discovered geometrical and topological influences on the blackbody force will help elucidate the role of this unusual force on objects throughout the universe.
"We think that the intensification of the blackbody force due to the ultradense sources can influence in a detectable way the phenomena associated with them, such as the emission of very energetic particles, and the formation of accretion discs around black holes," Muniz said. "That force can also help to detect the Hawking radiation emitted by these latter objects, since we know that such radiation obeys the blackbody spectrum. In the future, we would like to investigate the behavior of that force in other spacetimes, as well as the influence of extra dimensions on it."
More information: C. R. Muniz et al. "Dependence of the black-body force on spacetime geometry and topology." EPL. DOI: 10.1209/0295-5075/117/60001


Read more at: https://phys.org/news/2017-05-blackbody-spacetime-geometry-topology.html#jCp
(FULL STORY)

Gravitational Waves Could Help Us Detect the Universe’s Hidden Dimensions
[5/5/2017]
Gravitational waves might be used to uncover hidden dimensions in the universe. Researchers at the Max Planck Institute for Gravitational Physics in Germany say that by studying these ripples in spacetime, we could work out what imprint hidden dimensions would leave on them and then search for those effects in the data.

The discovery of gravitational waves was announced in February 2016. Scientists used the Laser Interferometer Gravitational-wave Observatory (LIGO) detectors to find fluctuations in spacetime created by a pair of colliding black holes. Scientists can now use this information to see the universe in a whole new way—potentially even one day tracing waves that came from the Big Bang.

At present, our models of the universe are incomplete. They cannot explain many of the things we observe in the universe, so many physicists believe we are missing something—and that something could be the presence of extra dimensions.

If scientists were to find evidence of extra dimensions, they could start answering some of the most fundamental unknowns of the universe, like what dark matter is and why the universe is expanding at an accelerating rate.

Gravitational waves are ripples in spacetime caused by extremely energetic events. These events, like merging black holes, would release so much energy they would disrupt the way spacetime moves, creating ‘waves’ that would propagate out from the source—similar to the way a pebble thrown into a pond creates ripples moving outwards.

Gravitational waves were first predicted by Albert Einstein over 100 years ago, but until now we have not been able to find them. By the time the ripples reach us, they are so tiny that detecting them requires hugely sensitive equipment. This is what LIGO was able to do.

In the latest study, which appears on the preprint server arxiv.org, David Andriot and Gustavo Lucena Gómez look at how gravitational waves move through the known dimensions—three representing space and another for time. They then investigate what effects extra dimensions might have on the four dimensional waves we see.

“If there are extra dimensions in the universe, then gravitational waves can walk along any dimension, even the extra dimensions,” Lucena Gómez told New Scientist.

They found extra dimensions could have two effects on gravitational waves—firstly, they would have what they call a “breathing mode.” This provides another way for the gravitational waves to move in space.

“The breathing mode deforms the space in a specific manner, giving a distinct signature,” they wrote. To observe this change, they would need three detectors like LIGO all working to observe the same thing at the same time—something that “should be available in a near future,” they wrote.

The second effect is a “massive tower” of extra gravitational waves. These would appear at frequencies far beyond the reach of current technology: to detect changes at the frequencies they propose, LIGO would need to be thousands of times more sensitive.

The scientists are clear that such apparatus does not exist, but note: “If such a detector were available, however, one could hope for a very clean signal, since there is no known astrophysical process emitting gravitational waves with frequencies much greater than 10^3 Hz. Such high frequencies may thus be clear symptoms of new physics.”

However, Bobby Acharya, professor of Theoretical Physics at King’s College London, U.K., who was not involved in the study, is not convinced by the findings. In an interview with Newsweek, he says that while he firmly believes in the existence of extra dimensions, models suggest these dimensions would be extremely small: “That means that in order to excite them and create waves in those extra dimensions you require a lot of energy,” he says.

“And if you did produce the gravitational wave that propagated in the extra dimensions, the fact that extra dimensions are so small it means the frequency of this gravitational wave will be very high—much higher than the LIGO gravitational wave detectors can detect.”

He said you would need a “very optimistic point of view” to try to detect gravitational waves propagating in extra dimensions: “[The extra dimensions] would have to be rather large and then it would be difficult to make the model consistent with other observations. I’m not so positive about the result.”
(FULL STORY)

We could detect alien life by finding complex molecules
[4/27/2017]
By Bob Holmes in Mesa, Arizona

How can we search for life on other planets when we don’t know what it might look like? One chemist thinks he has found an easy answer: just look for sophisticated molecular structures, no matter what they’re made of. The strategy could provide a simple way for upcoming space missions to broaden the hunt.

Until now, the search for traces of life, or biosignatures, on other planets has tended to focus mostly on molecules like those used by earthly life. Thus, Mars missions look for organic molecules, and future missions to Europa may look for amino acids, unequal proportions of mirror-image molecules, and unusual ratios of carbon isotopes, all of which are signatures of life here on Earth.

But if alien life is very different, it may not show any of these. “I think there’s a real possibility we could miss life if [resembling Earth life is] the only criterion,” says Mary Voytek, who heads NASA’s astrobiology programme.

Now Lee Cronin, a chemist at the University of Glasgow, UK, argues that complexity could be a biosignature that doesn’t depend on any assumptions about the life forms that produce it. “Biology has one signature: the ability to produce complex things that could not arise in the natural environment,” Cronin says.

Obviously, an aircraft or a mobile phone could not assemble spontaneously, so their existence points to a living – and even intelligent – being that built them. But simpler things like proteins, DNA molecules or steroid hormones are also highly unlikely to occur without being assembled by a living organism, Cronin says.

Step by step
Cronin has developed a way to measure the complexity of a molecule by counting the number of unique steps – adding chemical side groups or ring structures, for example – needed for its formation, without double-counting repeated steps. To draw an analogy, his metric would score the words “bana” and “banana” as equally complex, since once you can make one “na” it is trivial to add a second one.

Any structure requiring more than about 15 steps is so complex it must be biological in origin, he said this week at the Astrobiology Science Conference in Mesa, Arizona.
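A deliberately crude toy version of this bookkeeping, using strings rather than molecules and counting only distinct building operations (this is not Cronin's actual metric), makes the "bana"/"banana" point concrete:

# Toy illustration of "count unique construction steps, not repeats" (not Cronin's metric).
# It simply counts the distinct overlapping fragments of a chosen length used to spell a string.
def unique_steps(word, unit=2):
    """Number of distinct length-`unit` fragments appearing in `word`."""
    fragments = {word[i:i + unit] for i in range(len(word) - unit + 1)}
    return len(fragments)

for w in ("bana", "banana"):
    print(w, "->", unique_steps(w))   # both score 3: fragments {'ba', 'an', 'na'}

Both words score the same because the second "na" reuses an operation already in the toolkit, which is the intuition behind the roughly 15-step threshold; for real molecules the "steps" are additions of particular side groups or rings, and the accounting is considerably more subtle.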

Cronin thinks he may be able to make that criterion simpler still, by specifying a maximum molecular weight for compounds that can assemble spontaneously.

Astrobiologists welcome Cronin’s suggestion. “I appreciate Lee for developing a biosignature that has minimal assumptions about the biology,” says Voytek.

In practice, though, Voytek notes that a detector compact enough to travel on an interplanetary mission would probably need to be designed to look for carbon-based life.

And even if Cronin’s method works, no scientist would risk claiming to have found extraterrestrial life on the basis of just one line of evidence, says Kevin Hand of NASA’s Jet Propulsion Laboratory and project scientist for the Europa Lander mission now being developed by NASA. That means that future missions will still need to look for multiple biosignatures.
(FULL STORY)

We May Have Uncovered the First Ever Evidence of the Multiverse
[4/27/2017]
TOO COLD

For years, scientists have been baffled by a weird anomaly far away in space: a mysterious “Cold Spot” about 1.8 billion light-years across. It is cooler than its surroundings by around 0.00015 degrees Celsius (0.00027 degrees Fahrenheit), a fact astronomers discovered by measuring background radiation throughout the universe.

Previously, astronomers believed that this space could be cooler simply because it had less matter in it than most sections of space. They dubbed it a massive supervoid and estimated that it had 10,000 galaxies fewer than other comparable sections of space.

But now, in a recently published survey of galaxies, astronomers from the Royal Astronomical Society (RAS) say they have discovered that this supervoid could not exist. They now believe that the galaxies in the cold spot are just clustered around smaller voids that populate the cold spot like bubbles. These small voids, however, cannot explain the temperature difference observed.

MULTIVERSE?

To link the temperature differences to the smaller voids, the researchers say a non-standard cosmological model would be required. “But our data place powerful constraints on any attempt to do that,” explained researcher Ruari Mackenzie in an RAS press release. While the study had a large margin of error, the simulations suggest there is only a two percent probability that the Cold Spot formed randomly.

“This means we can’t entirely rule out that the Spot is caused by an unlikely fluctuation explained by the standard model. But if that isn’t the answer, then there are more exotic explanations,” said researcher Tom Shanks in the press release. “Perhaps the most exciting of these is that the Cold Spot was caused by a collision between our universe and another bubble universe.”

If more detailed studies support the findings of this research, the Cold Spot might turn out to be the first evidence for the multiverse, though far more evidence would be needed to confirm our universe is indeed one of many.
(FULL STORY)

Scientists Just Discovered an Alien Planet That’s The Best Candidate for Life As We Know It
[4/18/2017]
SUPER-EARTH LHS 1140B

Only a few decades ago, the existence of planets beyond our solar system was purely hypothetical. Now, we know of thousands of such planets – and today, scientists may have discovered the best candidate yet for alien life.

That candidate is an exoplanet orbiting a red dwarf star 40 light-years from Earth—what the international team of astronomers who discovered it have deemed a “super-Earth.” Using ESO’s HARPS instrument and a range of telescopes around the world, the astronomers located the exoplanet orbiting the dim star – LHS 1140 – within its habitable zone. This world passes in front of its parent star as it orbits, has likely retained most of its atmosphere, and is a little larger and much more massive than the Earth. In short, super-Earth LHS 1140b is among the most exciting known subjects for atmospheric studies.

Although LHS 1140b orbits ten times closer to its faint red dwarf host star, LHS 1140, than the Earth does to the Sun, red dwarfs are much smaller and cooler than the Sun, so the super-Earth lies in the middle of the habitable zone and receives around half as much sunlight from its star as the Earth does.
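The "around half as much sunlight" figure follows from simple inverse-square scaling. The sketch below assumes a stellar luminosity of roughly 0.4 percent of the Sun's and an orbital distance of one tenth of the Earth-Sun distance; the luminosity value is an assumption introduced here for illustration, not a number from the ESO release:

# Inverse-square insolation sketch for LHS 1140b (illustrative, assumed luminosity).
L_star = 0.004   # stellar luminosity in units of the Sun's (assumed value)
d = 0.1          # orbital distance in astronomical units (ten times closer than Earth, as above)

flux_relative_to_earth = L_star / d**2   # Earth receives flux 1 at L = 1, d = 1 in these units
print(f"Insolation relative to Earth: {flux_relative_to_earth:.2f}")   # about 0.4, i.e. around half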

“This is the most exciting exoplanet I’ve seen in the past decade,” lead author Jason Dittmann of the Harvard-Smithsonian Center for Astrophysics said in an ESO science release. “We could hardly hope for a better target to perform one of the biggest quests in science — searching for evidence of life beyond Earth.”

Artist’s impression of the super-Earth exoplanet LHS 1140b. Credit: ESO
LIFE AS WE KNOW IT

To support life as we know it, a planet must retain an atmosphere and have liquid surface water. When red dwarf stars are young, they emit radiation that can damage the atmospheres of planets around them. This planet’s large size indicates that a magma ocean may have existed on its surface for eons, feeding steam into the atmosphere and replenishing the planet with water well into the period after the star had settled into its current, steady glow. The astronomers estimate the planet is at least five billion years old, and deduce that it has a diameter of almost 18,000 kilometers (11,185 mi) — 1.4 times that of the Earth. Its greater mass and density imply that it is probably made of rock with a dense iron core.

Two of the European members of the team, Xavier Delfosse and Xavier Bonfils, stated in the release: “The LHS 1140 system might prove to be an even more important target for the future characterization of planets in the habitable zone than Proxima b or TRAPPIST-1. This has been a remarkable year for exoplanet discoveries!”

Scientists expect observations with the Hubble Space Telescope will soon allow them to assess how much high-energy radiation the exoplanet receives, and further into the future — with the help of new telescopes like ESO’s Extremely Large Telescope and the James Webb Telescope — detailed observations of the atmospheres of exoplanets will be possible.
(FULL STORY)

Physicists detect whiff of new particle at the Large Hadron Collider
[4/18/2017]
For decades, particle physicists have yearned for physics beyond their tried-and-true standard model. Now, they are finding signs of something unexpected at the Large Hadron Collider (LHC), the world’s biggest atom smasher at CERN, the European particle physics laboratory near Geneva, Switzerland. The hints come not from the LHC’s two large detectors, which have yielded no new particles since they bagged the last missing piece of the standard model, the Higgs boson, in 2012, but from a smaller detector, called LHCb, that precisely measures the decays of familiar particles.

The latest signal involves deviations in the decays of particles called B mesons—weak evidence on its own. But together with other hints, it could point to new particles lying on the high-energy horizon. “This has never happened before, to observe a set of coherent deviations that could be explained in a very economical way with one single new physics contribution,” says Joaquim Matias, a theorist at the Autonomous University of Barcelona in Spain. Matias says the evidence is strong enough for a discovery claim, but others urge caution.

The LHC smashes protons together at unprecedented energy to try to blast into existence massive new particles, which its two big detectors, ATLAS and CMS, would spot. LHCb focuses on familiar particles, in particular B mesons, using an exquisitely sensitive tracking detector to sniff out the tiny explosive decays.

B mesons are made of fundamental particles called quarks. Familiar protons and neutrons are made of two flavors of quarks, up and down, bound in trios. Heavier quark flavors—charm, strange, top, and bottom—can be created, along with their antimatter counterparts, in high-energy particle collisions; they pair with antiquarks to form mesons.

Lasting only a thousandth of a nanosecond, B mesons potentially provide a window onto new physics. Thanks to quantum uncertainty, their interiors roil with particles that flit in and out of existence and can affect how they decay. Any new particles tickling the innards of B mesons—even ones too massive for the LHC to create—could cause the rates and details of those decays to deviate from predictions in the standard model. It’s an indirect method of hunting new particles with a proven track record. In the 1970s, when only the up, down, and strange quarks were known, physicists predicted the existence of the charm quark by discovering oddities in the decays of K mesons (a family of mesons all containing a strange quark bound to an antiquark).

In their latest result, reported today in a talk at CERN, LHCb physicists find that when one type of B meson decays into a K meson, its byproducts are skewed: The decay produces a muon (a cousin of the electron) and an antimuon less often than it makes an electron and a positron. In the standard model, those rates should be equal, says Guy Wilkinson, a physicist at the University of Oxford in the United Kingdom and spokesperson for the 770-member LHCb team. “This measurement is of particular interest because theoretically it’s very, very clean,” he says.
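To see why a single deviation like this is "weak evidence on its own", note that the measured quantity is essentially a ratio of two event counts, each carrying Poisson-like uncertainty. The sketch below uses made-up, purely hypothetical counts to show how the naive uncertainty on such a ratio is estimated; it is not LHCb data or the LHCb analysis:

# Toy ratio of decay counts with naive Poisson errors (hypothetical numbers, not LHCb data).
import math

n_mumu = 160   # hypothetical number of B -> K mu+ mu- candidates
n_ee = 200     # hypothetical number of B -> K e+ e- candidates

ratio = n_mumu / n_ee
err = ratio * math.sqrt(1.0 / n_mumu + 1.0 / n_ee)   # propagate sqrt(N) uncertainties

print(f"R = {ratio:.2f} +/- {err:.2f}")                     # e.g. 0.80 +/- 0.08
print(f"Deviation from 1: {(1.0 - ratio) / err:.1f} sigma")

With numbers like these, any single ratio deviates by only a few sigma, which is why the coherence of several independent anomalies matters more than any one of them.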

Strangely familiar
[Graphic omitted: A new process appears to be modifying one of the standard ways a B meson decays to a K meson. It may involve a new force-carrying particle called a Z′ that avoids creating a short-lived top quark. The two panels show the "Standard model decay", in which the bottom quark becomes a strange quark via a short-lived top quark and the charged and neutral weak force bosons (W− and Z), emitting a muon-antimuon pair, and a "Possible new decay", in which the bottom quark converts directly into a strange quark via the possible new particle, Z′. Credit: V. Altounian/Science]
The result is just one of half a dozen faint clues LHCb physicists have found that all seem to jibe. For example, in 2013, they examined the angles at which particles emerge in such B meson decays and found that they didn’t quite agree with predictions.

What all those anomalies point to is less certain. Within the standard model, a B meson decays to a K meson only through a complicated “loop” process in which the bottom quark briefly turns into a top quark before becoming a strange quark. To do that, it has to emit and reabsorb a W boson, a “force particle” that conveys the weak force (see graphic above).

The new data suggest the bottom quark might morph directly into a strange quark—a change the standard model forbids—by spitting out a new particle called a Z′ boson. That hypothetical cousin of the Z boson would be the first particle beyond the standard model and would add a new force to the theory. The extra decay process would lower production of muons, explaining the anomaly. “It’s sort of an ad hoc construct, but it fits the data beautifully,” says Wolfgang Altmannshofer, a theorist at the University of Cincinnati in Ohio. Others have proposed that a quark–electron hybrid called a leptoquark might briefly materialize in the loop process and provide another way to explain the discrepancies.

Of course, the case for new physics could be a mirage of statistical fluctuations. Physicists with ATLAS and CMS 18 months ago reported hints of a hugely massive new particle only to see them fade away with more data. The current signs are about as strong as those were, Altmannshofer says.

The fact that physicists are using LHCb to search in the weeds for signs of something new underscores the fact that the LHC hasn’t yet lived up to its promise. “ATLAS and CMS were the detectors that were going to discover new things, and LHCb was going to be more complementary,” Matias says. “But things go as they go.”

If the Z′ or leptoquarks exist, then the LHC might have a chance to blast them into bona fide, albeit fleeting, existence, Matias says. The LHC is now revving up after its winter shutdown. Next month, the particle hunters will return to their quest.
(FULL STORY)

Physicists Discover Hidden Aspects of Electrodynamics
[4/11/2017]
BATON ROUGE – Radio waves, microwaves and even light itself are all made of electric and magnetic fields. The classical theory of electromagnetism was completed in the 1860s by James Clerk Maxwell. At the time, Maxwell’s theory was revolutionary, and provided a unified framework to understand electricity, magnetism and optics. Now, new research led by LSU Department of Physics & Astronomy Assistant Professor Ivan Agullo, with colleagues from the Universidad de Valencia, Spain, advances knowledge of this theory. Their recent discoveries have been published in Physical Review Letters.

Maxwell’s theory displays a remarkable feature: it remains unaltered under the interchange of the electric and magnetic fields, when charges and currents are not present. This symmetry is called the electric-magnetic duality.
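Concretely, in the absence of charges and currents the Maxwell equations (standard textbook material, written here in natural units) read

\[
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \frac{\partial \mathbf{E}}{\partial t},
\]

and they are mapped into themselves by the exchange \(\mathbf{E} \to \mathbf{B},\ \mathbf{B} \to -\mathbf{E}\), or more generally by the continuous rotation \(\mathbf{E} \to \mathbf{E}\cos\theta + \mathbf{B}\sin\theta,\ \mathbf{B} \to \mathbf{B}\cos\theta - \mathbf{E}\sin\theta\). Adding electric charges and currents without magnetic ones breaks this symmetry.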

However, while electric charges exist, magnetic charges have never been observed in nature. If magnetic charges do not exist, the symmetry also cannot exist. This mystery has motivated physicists to search for magnetic charges, or magnetic monopoles. However, no one has been successful. Agullo and his colleagues may have discovered why.

“Gravity spoils the symmetry regardless of whether magnetic monopoles exist or not. This is shocking. The bottom line is that the symmetry cannot exist in our universe at the fundamental level because gravity is everywhere,” Agullo said.

Gravity, together with quantum effects, disrupts the electric-magnetic duality or symmetry of the electromagnetic field.

Agullo and his colleagues discovered this by looking at previous theories that illustrate this phenomenon among other types of particles in the universe, called fermions, and applied it to photons in electromagnetic fields.

“We have been able to write the theory of the electromagnetic field in a way that very much resembles the theory of fermions, and prove this absence of symmetry by using powerful techniques that were developed for fermions,” he said.

This new discovery challenges assumptions that could impact other research including the study of the birth of the universe.



The Big Bang

Satellites collect data from the radiation emitted from the Big Bang, which is called the Cosmic Microwave Background, or CMB. This radiation contains valuable information about the history of the universe.

“By measuring the CMB, we get precise information on how the Big Bang happened,” Agullo said.

Scientists analyzing this data have assumed that the polarization of photons in the CMB is not affected by the gravitational field in the universe, which is true only if electromagnetic symmetry exists. However, since this new finding suggests that the symmetry does not exist at the fundamental level, the polarization of the CMB can change throughout cosmic evolution. Scientists may need to take this into consideration when analyzing the data. The focus of Agullo’s current research is on determining how large this new effect is.

This research is supported by the National Science Foundation grants PHY-1403943 and PHY-1552603.
(FULL STORY)

A dark matter 'bridge' holding galaxies together has been captured for the first time
[4/12/2017]
The first image of a dark matter "bridge", believed to form the links between galaxies, has been captured by astrophysicists in Canada.

Researchers at the University of Waterloo used a technique known as weak gravitational lensing to create a composite image of the bridge. Gravitational lensing is an effect that causes the images of distant galaxies to warp slightly under the influence of an unseen mass, such as a planet, a black hole, or in this case, dark matter.

Their composite image was built by combining lensing images taken of more than 23,000 galaxy pairs, spotted 4.5 billion light-years away. This effect was measured from a multi-year sky survey at the Canada-France-Hawaii Telescope.
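The statistical idea behind such a composite is stacking: any single pair's lensing signal is far too weak to see, but averaging many aligned, noisy maps suppresses the random noise by roughly the square root of the number of maps. The sketch below is a generic illustration of that principle with made-up numbers; it is not the Waterloo team's pipeline:

# Generic stacking illustration: averaging many noisy maps reveals a weak shared signal.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_maps, size = 23000, 32            # number of "galaxy pair" maps and map size in pixels
signal = np.zeros((size, size))
signal[size // 2, :] = 0.01         # a faint "bridge" between the pair, amplitude 0.01
noise_sigma = 1.0                   # per-pixel noise in each individual map

stack = np.zeros_like(signal)
for _ in range(n_maps):
    stack += signal + rng.normal(0.0, noise_sigma, size=(size, size))
stack /= n_maps

print("expected stacked noise per pixel:", noise_sigma / np.sqrt(n_maps))       # ~0.0066
print("bridge amplitude recovered:", round(float(stack[size // 2].mean()), 4))  # ~0.01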

These results show that the dark matter filament bridge is strongest between systems less than 40 million light years apart, and confirms predictions that galaxies across the Universe are tied together through a cosmic web of the elusive substance.

Dark matter is a mysterious substance said to make up around 84 per cent of the Universe. It's known as "dark" because it doesn't shine, absorb or reflect light, which has traditionally made it largely undetectable, except through gravity and gravitational lensing. Evidence for the existence of this form of matter comes, among other things, from the astrophysical observation of galaxies, which rotate far too rapidly to be held together only by the gravitational pull of the visible matter.

Astrophysics has long proposed the Universe's web of stars and galaxies is supported by a "cosmic scaffolding" made up of fine threads of this invisible dark matter. These threadlike strands formed just after the Big Bang when denser portions of the Universe drew in dark matter until it collapsed and formed flat disks, which featured fine filaments of dark matter at their joins. At the cross-section of these filaments, galaxies formed.

"For decades, researchers have been predicting the existence of dark matter filaments between galaxies that act like a web-like superstructure connecting galaxies together," said Mike Hudson, a professor of astronomy at the University of Waterloo in the journal Monthly Notices of the Royal Astronomical Society. "This image moves us beyond predictions to something we can see and measure."

"By using this technique, we're not only able to see that these dark matter filaments in the Universe exist, we're able to see the extent to which these filaments connect galaxies together," said co-author Seth Epps.

WHAT IS DARK MATTER?
Dark matter is an invisible form of matter which, until now, has only revealed itself through its gravitational effects.
High-precision measurements using the European satellite Planck show that almost 85 percent of the entire mass of the universe consists of dark matter.
All the stars, planets, nebulae and other objects in space that are made of conventional matter account for no more than 15 percent of the mass of the universe.
The unknown form of matter can either consist of comparatively few, but very heavy particles, or of a large number of light ones.
One of the possible candidates for dark matter is a particle called the axion, first proposed in 1977. It appears in some extensions of the Standard Model of particle physics. Astronomers believe that if axions make up dark matter, they could be detected through gravitational waves. This is because axions accelerated by a black hole would give off gravitational waves, just as electrons give off electromagnetic waves.

As a result, instruments like Ligo – and planned upgrades to it – may be able to see gravitational waves (GWs) from thousands of black hole (BH) mergers, which would mark the beginning of a new precision tool for physics.

Physicists have a general idea about what the dark matter particle looks like but are struggling to build a clear picture. They can track the distribution of dark matter throughout the galaxy by examining how galaxies move, but can't pinpoint its exact location or design.

Earlier this year, Priyamvada Natarajan, a professor of astrophysics at Yale University, and her team brought the search for dark matter a step forward by creating the most detailed map of dark matter ever created. The map looks like an alien landscape, with uneven peaks and troughs scattered throughout. There are gentle mounds, on top of which sharp peaks arise, like the inside of a cave covered in stalactites.
(FULL STORY)

No, Dark Energy Isn't An Illusion
[4/10/2017]
In 1998, two teams of scientists announced a shocking discovery: the expansion of the Universe was accelerating. Distant galaxies weren't just receding from us, but their recession speed was increasing over time. Over the next few years, precision measurements of three independent quantities -- distant galaxies containing type Ia supernovae, the fluctuation pattern in the cosmic microwave background, and large-scale correlations between galaxies at a variety of distances -- all supported and confirmed this picture. The leading explanation? That there's a new form of energy inherent to space itself: dark energy. The case is so strong that no one reasonably doubts the evidence, but many teams have made alternative cases for the explanation, claiming that dark energy itself could be an illusion.
To understand whether this could be the case, we need to walk through four straightforward steps:

What a Universe without dark energy would look like,
What our Universe actually looks like,
What alternative explanations have been offered up,
And whether any of them could legitimately work.

In science, as in all things, it's pretty easy to offer a "what if..." alternative scenario to the leading idea. But can it stand up to scientific rigor? That's the crucial test.
Well before we conceived of dark energy, all the way back in the 1920s and 1930s, scientists derived how the entire Universe could have evolved within General Relativity. If you assumed that space, on the largest scales, was uniform -- with the same density and temperature everywhere -- there were only three viable scenarios to describe a Universe that was expanding today. If you fill a Universe with matter and radiation, like ours appears to be, gravity will fight the expansion, and the Universe can:

expand up to a point, reach a maximum size, and then begin contracting, eventually leading to a total recollapse.
expand and slow down somewhat, but gravitation is insufficient to ever stop or reverse it, and so it will eternally expand into the great cosmic abyss.
expand, with gravitation and the expansion balancing each other perfectly, so the expansion rate and the recession speed of everything asymptotes to zero, but never reverses.

Those were the three classic fates of the Universe: big crunch, big freeze, or a critical Universe, which was right on the border between the two.
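In the standard Friedmann description (textbook material, not specific to this article), which fate occurs is set by the balance between the expansion rate and the density:

\[
H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2},
\]

where \(a\) is the scale factor, \(\rho\) the total density and \(k\) the spatial curvature. A Universe of matter and radiation recollapses, expands forever, or sits on the critical border depending on whether \(\rho\) lies above, below, or exactly at the critical density \(\rho_c = 3H^2/8\pi G\). The point of the observations described next is that none of these pure matter-plus-radiation histories fits; an extra term behaving like a cosmological constant, i.e. dark energy, is needed to produce late-time acceleration.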
But then the crucial observations came in, and it turns out the Universe did none of those three things. For the first six billion years or so after the Big Bang, it appeared we lived in a critical Universe, with the initial expansion and the effects of gravitational attraction balancing one another almost perfectly. But when the density of the Universe dropped below a certain amount, a surprise emerged: distant galaxies began speeding up, away from us and one another. This cosmic acceleration was unexpected, but robust, and has continued at the same rate ever since, for the past 7.8 billion years.
Why was this happening? The current, known forms of energy in the Universe -- particles, radiation and fields -- can't account for it. So scientists hypothesized a new form of energy, dark energy, that could cause the Universe's expansion to accelerate. There could be a new field that permeates all of space causing it; it could be the zero-point energy of the quantum vacuum; it could be Einstein's cosmological constant from General Relativity. Current and planned observatories and experiments are looking for possible signatures that would distinguish or search for departures from any of these potential explanations, but so far all are consistent with being the true nature of dark energy.
But alternatives have been proposed as well. Adding a new type of energy to the Universe should be a last resort to explain a new observation, or even a new suite of observations. A lot of people were skeptical of its existence, so scientists began asking the question of what else could be occurring? What could mimic these effects? A number of possibilities immediately emerged:

Perhaps the distant supernovae weren't the same as nearby ones, and were inherently fainter?
Perhaps there was something about the environments in which the supernovae occurred that changed?
Perhaps the distant light, en-route, was undergoing an interaction that caused it to fail to reach our eyes?
Perhaps a new type of dust existed, making these distant objects appear systematically fainter?
Or could it be that the assumption on which these models are founded -- that the Universe is, on the largest scales, perfectly uniform -- is flawed enough that what appears to be dark energy is simply the "correct" prediction of Einstein's theory?
The light-blocking, light-losing, or systematic light-differences scenarios have all been ruled out by multiple approaches, as even if supernovae were removed from the equation entirely, the evidence for dark energy would still be overwhelming. With precision measurements of the cosmic microwave background, baryon acoustic oscillations, and the large-scale structures that form and fail-to-form in our Universe, the case that the Universe's expansion is accelerating, and that dark energy is real, remains overwhelming.
(FULL STORY)

Satellite galaxies at edge of Milky Way coexist with dark matter
[4/3/2017]
Research conducted by scientists at Rochester Institute of Technology rules out a challenge to the accepted standard model of the universe and theory of how galaxies form by shedding new light on a problematic structure.

The vast polar structure - a plane of satellite galaxies at the poles of the Milky Way - is at the center of a tug-of-war between scientists who disagree about the existence of mysterious dark matter, the invisible substance that, according to some scientists, comprises 85 percent of the mass of the universe.

A paper accepted for publication in the Monthly Notices of the Royal Astronomical Society bolsters the standard cosmological model, or the Cold Dark Matter paradigm, by showing that the vast polar structure formed well after the Milky Way and is an unstable structure.

The study, "Is the Vast Polar Structure of Dwarf Galaxies a Serious Problem for CDM?" - available online at https:/?/?arxiv.?org/?abs/?1612.?07325 - was co-authored by Andrew Lipnicky, a Ph.D. candidate in RIT's astrophysical sciences and technology program, and Sukanya Chakrabarti, assistant professor in RIT's School of Physics and Astronomy, whose grant from the National Science Foundation supported the research.

Lipnicky and Chakrabarti analyze the distribution of the classical Milky Way dwarf galaxies that form the vast polar structure and compare it to simulations of the "missing" or subhalo dwarf galaxies thought to be cloaked in dark matter.

Using motion measurements, the authors traced the orbits of the classical Milky Way satellites backward in time. Their simulations showed the vast polar structure breaking up and dispersing, indicating that the plane is not as old as originally thought and formed later in the evolution of the galaxy. This means that the vast polar structure of satellite galaxies may be a transient feature, Chakrabarti noted.

"If the planar structure lasted for a long time, it would be a different story," Chakrabarti said. "The fact that it disperses so quickly indicates that the structure is not dynamically stable. There is really no inconsistency between the planar structure of dwarf galaxies and the current cosmological paradigm."

The authors removed the classical Milky Way satellites Leo I and Leo II from the study when orbital analyses determined that the dwarf galaxies were not part of the original vast polar structure but later additions likely snatched from the Milky Way. A comparison excluding Leo I and II reveals a similar plane shared by classical galaxies and their cloaked counterparts.

"We tried many different combinations of the dwarf galaxies, including distributions of dwarfs that share similar orbits, but in the end found that the plane always dispersed very quickly," Lipnicky said.

Opposing scientific thought rejects the existence of dark matter. This camp calls into question the standard cosmological paradigm that accepts both a vast polar structure of satellite galaxies and a hidden plane of dark-matter cloaked galaxies. Lipnicky and Chakrabarti's study supports the co-existence of these structures and refutes the challenge to the accepted standard model of the universe.

Their research concurs with a 2016 study led by Nuwanthika Fernando, from the University of Sydney, which found that certain Milky Way planes are unstable in general. That paper was also published in the Monthly Notices of the Royal Astronomical Society.
(FULL STORY)

Magnetic hard drives go atomic
[3/11/2017]
Chop a magnet in two, and it becomes two smaller magnets. Slice again to make four. But the smaller magnets get, the more unstable they become; their magnetic fields tend to flip polarity from one moment to the next. Now, however, physicists have managed to create a stable magnet from a single atom.

The team, who published their work in Nature on 8 March, used their single-atom magnets to make an atomic hard drive. The rewritable device, made from 2 such magnets, is able to store just 2 bits of data, but scaled-up systems could increase hard-drive storage density by 1,000 times, says Fabian Natterer, a physicist at the Swiss Federal Institute of Technology (EPFL) in Lausanne, and author of the paper.

“It’s a landmark achievement,” says Sander Otte, a physicist at Delft University of Technology in the Netherlands. “Finally, magnetic stability has been demonstrated undeniably in a single atom.”

Inside a regular hard drive is a disk split up into magnetized areas — each like a tiny bar magnet — the fields of which can point either up or down. Each direction represents a 1 or 0 — a unit of data known as a bit. The smaller the magnetized areas, the more densely data can be stored. But the magnetized regions must be stable, so that ‘1’s and ‘0’s inside the hard disk do not unintentionally switch.

Current commercial bits comprise around 1 million atoms. But in experiments physicists have radically shrunk the number of atoms needed to store 1 bit — moving from 12 atoms in 2012 to now just one. Natterer and his team used atoms of holmium, a rare-earth metal, sitting on a sheet of magnesium oxide, at a temperature below 5 kelvin.

Holmium is particularly suitable for single-atom storage because it has many unpaired electrons that create a strong magnetic field, and they sit in an orbit close to the atom's centre where they are shielded from the environment. This gives holmium both a large and stable field, says Natterer. But the shielding has a drawback: it makes the holmium notoriously difficult to interact with. And until now, many physicists doubted whether it was possible to reliably determine the atom’s state.

Bits of data
To write the data onto a single holmium atom, the team used a pulse of electric current from the magnetized tip of a scanning tunnelling microscope, which could flip the orientation of the atom's field between a 0 and a 1. In tests the magnets proved stable, each retaining their data for several hours, with the team never seeing one flip unintentionally. They used the same microscope to read out the bit — with different flows of current revealing the atom’s magnetic state.

To further prove that the tip could reliably read the bit, the team — which included researchers from the technology company IBM — devised a second, indirect, read-out method. They used a neighbouring iron atom as a magnetic sensor, tuning it so that its electronic properties depended on the orientation of the two holmium atomic magnets in the 2-bit system. The method also allows the team to read out multiple bits at the same time, says Otte, making it more practical and less invasive than the microscope technique.

Using individual atoms as magnetic bits would radically increase the density of data storage, and Natterer says that his EPFL colleagues are working on ways to make large arrays of single-atom magnets. But the 2-bit system is still far from practical applications and well behind another kind of single-atom storage, which encodes data in atoms’ positions rather than in their magnetization, and whose developers have already built a 1-kilobyte (8,192-bit) rewritable data storage device.

One advantage of the magnetic system, however, is that it could be compatible with spintronics, says Otte. This emerging technology uses magnetic states not just to store data, but to move information around a computer in place of electric current, and would make for much more energy-efficient systems.

In the near term, physicists are more excited about studying the single-atom magnets. Natterer, for example, plans to observe three mini-magnets that are oriented so their fields are in competition with each other — so they continually flip. “You can now play around with these single-atom magnets, using them like Legos, to build up magnetic structures from scratch,” he says.

Nature doi:10.1038/nature.2017.21599
(FULL STORY)

Could Mysterious Cosmic Light Flashes Be Powering Alien Spacecraft?
[3/10/2017]

Bizarre flashes of cosmic light may actually be generated by advanced alien civilizations, as a way to accelerate interstellar spacecraft to tremendous speeds, a new study suggests.

Astronomers have catalogued just 20 or so of these brief, superbright flashes, which are known as fast radio bursts (FRBs), since the first one was detected in 2007. FRBs seem to be coming from galaxies billions of light-years away, but what's causing them remains a mystery.

"Fast radio bursts are exceedingly bright given their short duration and origin at great distances, and we haven't identified a possible natural source with any confidence," study co-author Avi Loeb, a theorist at the Harvard-Smithsonian Center for Astrophysics, said in a statement Thursday (March 9). "An artificial origin is worth contemplating and checking." [5 Bold Claims of Alien Life]

One potential artificial origin, according to the new study, might be a gigantic radio transmitter built by intelligent aliens. So Loeb and lead author Manasvi Lingam, of Harvard University, investigated the feasibility of this possible explanation.

Artist's illustration of a light sail powered by a radio beam (red) generated on the surface of a planet. The leakage from such beams as they sweep across the sky would appear as superbright light flashes known as fast radio bursts, according to a new study.
Credit: M. Weiss/CfA
The duo calculated that a solar-powered transmitter could indeed beam FRB-like signals across the cosmos — but it would require a sunlight-collecting area twice the size of Earth to generate the necessary power.
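For a sense of scale, the sketch below multiplies twice Earth's cross-sectional area by the solar constant; reading "twice the size of Earth" as twice the cross-sectional area and assuming Earth-like insolation are simplifications introduced here for illustration, not numbers taken from the Lingam and Loeb paper:

# Rough order-of-magnitude sketch of the collected power (illustrative assumptions only).
import math

R_earth = 6.371e6          # Earth's radius, m
solar_constant = 1361.0    # sunlight flux at Earth's distance from the Sun, W/m^2 (assumed)

collecting_area = 2.0 * math.pi * R_earth**2      # "twice the size of Earth" read as cross-section
power = collecting_area * solar_constant
print(f"Collected power: {power:.2e} W")          # a few times 10^17 watts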

And the huge amounts of energy involved wouldn't necessarily melt the structure, as long as it was water-cooled. So, Lingam and Loeb determined, such a gigantic transmitter is technologically feasible (though beyond humanity's current capabilities).

Why would aliens build such a structure? The most plausible explanation, according to the study team, is to blast interstellar spacecraft to incredible speeds. These craft would be equipped with light sails, which harness the momentum imparted by photons, much as regular ships' sails harness the wind. (Humanity has demonstrated light sails in space, and the technology is the backbone of Breakthrough Starshot, a project that aims to send tiny robotic probes to nearby star systems.)

Indeed, a transmitter capable of generating FRB-like signals could drive an interstellar spacecraft weighing 1 million tons or so, Lingam and Loeb calculated.

"That's big enough to carry living passengers across interstellar or even intergalactic distances," Lingam said in the same statement.

Humanity would catch only fleeting glimpses of the "leakage" from these powerful beams (which would be trained on the spacecraft's sail at all times), because the light source would be moving constantly with respect to Earth, the researchers pointed out.



The duo took things a bit further. Assuming that ET is responsible for most FRBs, and taking into account the estimated number of potentially habitable planets in the Milky Way (about 10 billion), Lingam and Loeb calculated an upper limit for the number of advanced alien civilizations in a galaxy like our own: 10,000.

Lingam and Loeb acknowledge the speculative nature of the study. They aren't claiming that FRBs are indeed caused by aliens; rather, they're saying that this hypothesis is worthy of consideration.

"Science isn't a matter of belief; it's a matter of evidence," Loeb said. "Deciding what’s likely ahead of time limits the possibilities. It's worth putting ideas out there and letting the data be the judge."

The new study has been accepted for publication in The Astrophysical Journal Letters. You can read it for free on the online preprint site arXiv.org.
(FULL STORY)

NASA is Going to Create The Coldest Spot in the Known Universe
[3/8/2017]
Creating Cold Atom Lab

This summer, a box the size of an ice chest will journey to the International Space Station (ISS). Once there, it will become the coldest spot in the universe—more than 100 million times colder than deep space itself. The instruments inside the box — an electromagnetic “knife,” lasers, and a vacuum chamber — will slow down gas particles until they are almost motionless, bringing them just a billionth of a degree above absolute zero.

This box and its instruments are called the Cold Atom Laboratory (CAL). CAL was developed by the Jet Propulsion Laboratory (JPL), which is funded by NASA. Right now at JPL, CAL is in the final assembly stages, and getting ready for its trip to space which is set for August 2017. CAL will be hitching a ride on SpaceX CRS-12.

Once in space on the ISS, five scientific teams plan to use CAL to conduct experiments. Among them is the team headed by Eric Cornell, one of the scientists who won the Nobel Prize for creating Bose-Einstein condensates in a lab setting in 1995.

Seeing the Other 95%

Atoms that are cooled to extreme temperatures can form a unique state of matter: a Bose-Einstein condensate. This state is important scientifically because in it, the laws of quantum physics take over and we can observe matter behaving more like waves and less like particles. However, these rows of atoms, which move together like waves, can only be observed for fractions of a second on Earth because gravity causes atoms to move towards the ground. CAL achieves new low temperatures for longer observation of these mysterious waveforms.
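A standard way to see why such cold atoms act like waves is the thermal de Broglie wavelength, lambda = h / sqrt(2 pi m k_B T). The sketch below assumes rubidium-87, a species commonly used for Bose-Einstein condensates, at the billionth-of-a-degree temperature quoted above; it is a textbook estimate, not a CAL specification:

# Thermal de Broglie wavelength for rubidium-87 at one nanokelvin (textbook estimate).
import math

h = 6.626e-34             # Planck constant, J s
k_B = 1.381e-23           # Boltzmann constant, J/K
m_Rb87 = 87 * 1.661e-27   # mass of rubidium-87, kg
T = 1e-9                  # temperature, K (about a billionth of a degree above absolute zero)

lam = h / math.sqrt(2.0 * math.pi * m_Rb87 * k_B * T)
print(f"Thermal de Broglie wavelength: {lam * 1e6:.1f} micrometres")   # several micrometres

A wavelength of several micrometres is comparable to or larger than the typical spacing between atoms in such a dilute gas, which is the regime where the individual matter waves overlap and the gas can condense.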

Although NASA has never observed or created Bose-Einstein condensates in space, ultra-cold atoms can hold their wave-like forms longer while in freefall on the International Space Station. JPL Project Scientist Robert Thompson believes CAL will render Bose-Einstein condensates observable for up to five to 10 seconds. He also believes that improvements to CAL’s technologies could allow for hundreds of seconds of observation time.

“Studying these hyper-cold atoms could reshape our understanding of matter and the fundamental nature of gravity,” said Thompson. “The experiments we’ll do with the Cold Atom Lab will give us insight into gravity and dark energy—some of the most pervasive forces in the universe.”

These experiments could potentially lead to improved technologies, including quantum computers, sensors, and atomic clocks for navigation on spacecraft. CAL deputy project manager Kamal Oudrhiri of JPL cites dark energy detection applications as “especially exciting.” Current physics models indicate that the universe is about 68 percent dark energy, 27 percent dark matter, and 5 percent ordinary matter.

“This means that even with all of our current technologies, we are still blind to 95 percent of the universe,” Oudrhiri said. “Like a new lens in Galileo’s first telescope, the ultra-sensitive cold atoms in the Cold Atom Lab have the potential to unlock many mysteries beyond the frontiers of known physics.”
(FULL STORY)

Testing theories of modified gravity
[3/1/2017]
Physics Today 70, 3, 21 (2017); doi: http://dx.doi.org/10.1063/PT.3.3485
The accelerated expansion of the universe is usually attributed to a mysterious dark energy, but there’s another conceivable explanation: modified gravity. Unmodified gravity—that is, Einstein’s general relativity— satisfactorily accounts for the dynamics of the solar system, where precision measurements can be made without the confounding influence of dark matter. Nor have any violations been detected in one of general relativity’s principal ingredients, the strong equivalence principle, which posits that inertial mass and gravitational mass are identical.
But those observational constraints are not ineluctable. In particular, a class of gravitational theories called Galileon models can also pass them. In 2012 Lam Hui and Alberto Nicolis of Columbia University devised a cosmic test that could refute or confirm the models. Their test hinges on the models’ central feature: an additional scalar field that couples to mass. The coupling can be characterized by a charge-like parameter, Q. For most cosmic objects, Q has the same value as the inertial mass. But for a black hole, whose mass arises entirely from its gravitational binding energy, Q is zero; the strong equivalence principle is violated.
Galaxies fall through space away from low concentrations of mass and toward high concentrations. The supermassive black holes at the centers of some galaxies are carried along with the flow. But if gravity has a Galileon component, the black hole feels less of a tug than do the galaxy’s stars, interstellar medium, and dark-matter particles. The upshot, Hui and Nicolis realized, is that the black hole will lag the rest of the galaxy and slip away from its center. The displacement is arrested when the black hole reaches the point where the lag is offset by the presence of more of the galaxy’s gravitational mass on one side of the black hole than on the other. Given the right circumstances, the displacement can be measured.
Hui and Nicolis’s proposal has now itself been put to the test. Asha Asvathaman and Jeremy Heyl of the University of British Columbia, together with Hui, have applied it to two galaxies: M32, which is being pulled toward its larger neighbor, the Andromeda galaxy, and M87 (shown here), which is being pulled through the Virgo cluster of galaxies. Both M32 and M87 are elliptical galaxies. Because of their simple shapes, their centroids can be determined from optical observations. The locations of their respective black holes can be determined from radio observations. Although the limit on Galileon gravity that Asvathaman, Heyl, and Hui derived was too loose to refute or confirm the theory, they nevertheless validated the test itself. More precise astrometric observations could make it decisive. (A. Asvathaman, J. S. Heyl, L. Hui, Mon. Not. R. Astron. Soc. 465, 3261, 2017, doi:10.1093/mnras/stw2905.)
(FULL STORY)

First Solid Sign that Matter Doesn't Behave Like Antimatter
[2/27/2017]
One of the biggest mysteries in physics is why there's matter in the universe at all. This week, a group of physicists at the world's largest atom smasher, the Large Hadron Collider, might be closer to an answer: They found that particles in the same family as the protons and neutrons that make up familiar objects behave in a slightly different way from their antimatter counterparts.

While matter and antimatter have all of the same properties, antimatter particles carry charges that are the opposite of those in matter. In a block of iron, for example, the protons are positively charged and the electrons are negatively charged. A block of antimatter iron would have negatively charged antiprotons and positively charged antielectrons (known as positrons). If matter and antimatter come in contact, they annihilate each other and turn into photons (or occasionally, a few lightweight particles such as neutrinos). Other than that, a piece of matter and antimatter should behave in the same way, and even look the same — a phenomenon called charge-parity (CP) symmetry. [The 18 Biggest Unsolved Mysteries in Physics]

Besides the identical behavior, CP symmetry also implies that the amount of matter and antimatter that was formed at the Big Bang, some 13.7 billion years ago, should have been equal. Clearly it was not, because if that were the case, then all the matter and antimatter in the universe would have been annihilated at the start, and even humans wouldn't be here.

But if there were a violation to this symmetry — meaning some bit of antimatter were to behave in a way that was different from its matter counterpart — perhaps that difference could explain why matter exists today.

To look for this violation, physicists at the Large Hadron Collider, a 17-mile-long (27 kilometers) ring beneath Switzerland and France, observed a particle called a lambda-b baryon. Baryons include the class of particles that make up ordinary matter; protons and neutrons are baryons. Baryons are made of quarks, and antimatter baryons are made of antiquarks. Both quarks and antiquarks come in six "flavors": up, down, top, bottom (or beauty), strange and charm, as scientists call the different varieties. A lambda-b is made of one up, one down and one bottom quark. (A proton is made of two up and one down, while a neutron consists of two down and one up quark.)

If the lambda and its antimatter sibling show CP symmetry, then they would be expected to decay in the same way. Instead, the team found that the lambda-b and antilambda-b particles decayed differently. Lambdas decay in two ways: into a proton and two charged particles called pi mesons (or pions), or into a proton and two K mesons (or kaons). When particles decay, they throw off their daughter particles at a certain set of angles. The matter and antimatter lambdas did that, but the angles were different. [7 Strange Facts About Quarks]

This is not the first time matter and antimatter have behaved differently. In the 1960s, scientists studied kaons themselves, which also decayed in a way that was different from their antimatter counterparts. B mesons — which consist of a bottom quark and an up, down, strange or charm quark — have also shown similar "violating" behavior.

Mesons, though, are not quite like baryons. Mesons are pairs of quarks and antiquarks. Baryons are made of ordinary quarks only, and antibaryons are made of antiquarks only. Discrepancies between baryon and antibaryon decays had never been observed before.

"Now we have something for baryons," Marcin Kucharczyk, an associate professor at the Institute of Nuclear Physics of the Polish Academy of Sciences, which collaborated on the LHC experiment, told Live Science. "When you'd observed mesons, it was not obvious that for baryons it was the same."

While tantalizing, the results were not quite solid enough to count as a discovery. For physicists, the measure of statistical significance, which is a way of checking whether one's data could happen by chance, is 5 sigma. Sigma refers to standard deviations, and 5 sigma means that there is only a 1 in 3.5 million chance that the results are a statistical fluke. This experiment got to 3.3 sigma — good, but not quite there yet. (That is, 3.3 sigma means that there's roughly a 1 in 2,000 chance that the observation would have occurred randomly, or about a 99.95-percent confidence level.)
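
As a rough guide to these sigma-to-probability conversions, the short sketch below uses the one-sided tail of a standard normal distribution (the convention that gives roughly 1 in 3.5 million for 5 sigma). It is an illustrative calculation, not part of the LHCb analysis, and it assumes only that SciPy is installed.

from scipy.stats import norm

for sigma in (3.3, 5.0):
    p = norm.sf(sigma)  # one-sided tail probability of a standard normal distribution
    print(f"{sigma} sigma -> p = {p:.2e}, i.e. roughly 1 in {1/p:,.0f}")

# Prints approximately:
#   3.3 sigma -> about 1 in 2,100
#   5.0 sigma -> about 1 in 3.5 million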

The findings are not a complete answer to the mystery of why matter dominates the universe, Kucharczyk said.

"It cannot explain the asymmetry fully," he said. "In the future, we will have more statistics, and maybe for other baryons."

The findings are detailed in the Jan. 30 issue of the journal Nature Physics.
(FULL STORY)

Physicists investigate erasing information at zero energy cost
[2/22/2017]
(Phys.org)—A few years ago, physicists showed that it's possible to erase information without using any energy, in contrast to the assumption at the time that erasing information must require energy. Instead, the scientists showed that the cost of erasure could be paid in terms of an arbitrary physical quantity such as spin angular momentum—suggesting that heat energy is not the only conserved quantity in thermodynamics.
Investigating this idea further, physicists Toshio Croucher, Salil Bedkihal, and Joan A. Vaccaro at the Centre for Quantum Dynamics, Griffith University, Brisbane, Queensland, Australia, have now discovered some interesting results about the tiny fluctuations in the spin cost of erasing information. The work could lead to the development of new types of heat engines and information processing devices.
As the scientists explain in a new paper published in Physical Review Letters, the possibility that information can be erased at zero energy cost is surprising at first due to the fact that energy and entropy are so closely related in thermodynamics. In the context of information, information erasure corresponds to entropy erasure (or a decrease in entropy) and therefore requires a minimum amount of energy, which is determined by Landauer's erasure principle.
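For a sense of scale, the Landauer bound mentioned above works out to k_B T ln 2 of heat per erased bit. The short sketch below evaluates that formula at an assumed room temperature of 300 K; the temperature is an illustrative choice, not a figure from the paper.

import math

k_B = 1.380649e-23  # Boltzmann constant, joules per kelvin
T = 300.0           # assumed room temperature in kelvin (illustrative)
bound = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {bound:.2e} J per bit")  # about 2.87e-21 J
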
Since Landauer's erasure principle is equivalent to the second law of thermodynamics, the zero-energy erasure scheme using arbitrary conserved quantities can be thought of as a generalized second law of thermodynamics. This idea dates back to at least 1957, when E. T. Jaynes proposed an alternative to the second law in which heat energy is thought of in a more general way than usual, so that heat incorporates other kinds of conserved quantities.
Applying this framework to information erasure, in 2011 Vaccaro and Stephen Barnett showed that the energy cost of information erasure can be substituted with one or more different conserved quantities—specifically, spin angular momentum.
One important difference between heat energy and spin angular momentum is that, while heat may or may not be quantized, spin angular momentum is an intrinsically quantum mechanical property, and so it is always quantized. This has implications when it comes to accounting for tiny fluctuations in these quantities that become significant when designing systems at the nanoscale.

Scientists have only recently investigated these fluctuations in the context of the Landauer principle, where they found that these fluctuations are quickly suppressed by something called the Jarzynski equality. This means that heat energy fluctuations have only a very tiny probability of violating the Landauer principle.
In the new study, the scientists have for the first time investigated the corresponding discrete fluctuations that arise when erasing information using spin.
Among their results, the researchers found that the discrete fluctuations are suppressed even more quickly than predicted by the corresponding Jarzynski equality for "spinlabor"—a new term the scientists devised that means the spin equivalent of work. This is the first evidence of beating this bound in an information erasure context. The quick suppression means that the fluctuations have an extremely low probability of using less than the minimal cost required to erase information using spin, as given by the Vaccaro-Barnett bound, which is the spin equivalent of the Landauer principle.
"Our work generalizes fluctuation relations for erasure using arbitrary conserved quantities and exposes the role of discreteness in the context of erasure," Bedkihal told Phys.org. "We also obtained a probability of violation bound that is tighter than the corresponding Jarzynski bound. This is a statistically significant result."
The scientists also point out that this process of erasing information with spin has already been experimentally demonstrated, although it appears to have gone unnoticed. In spin-exchange optical pumping, light is used to excite electrons in an atom to a higher energy level. For the electrons to return to their lower energy level during the relaxation process, atoms and nuclei collide with each other and exchange spins. This entropy-decreasing process can be considered analogous to erasing information at a cost of spin exchange.
Overall, the new results reveal insight into the thermodynamics of spin and could also guide the development of future applications. These could include new kinds of heat engines and information processing devices based on erasure that use inexpensive, locally available resources such as spin angular momentum. The researchers plan to further pursue these possibilities in the future.
"The erasure mechanism can be used to design generalized heat engines operating under the reservoirs of multiple conserved quantities such as a thermal reservoir and a spin reservoir," Bedkihal said. "For example, one may design heat engines using semiconductor quantum dot systems where lattice vibrations constitute a thermal reservoir and nuclear spins constitute a polarized spin reservoir. Such heat engines go beyond the traditional Carnot heat engine that operates under two thermal reservoirs."
Explore further: Scientists show how to erase information without using energy
More information: Toshio Croucher, Salil Bedkihal, and Joan A. Vaccaro. "Discrete Fluctuations in Memory Erasure without Energy Cost." Physical Review Letters. DOI: 10.1103/PhysRevLett.118.060602. Also available at arXiv:1604.05795 [quant-ph].
(FULL STORY)

NASA Just Found A Solar System With 7 Earth-Like Planets
[2/22/2017]
AN OCEAN OF WORLDS

Today, scientists working with telescopes at the European Southern Observatory and NASA announced a remarkable new discovery: An entire system of Earth-sized planets. If that’s not enough, the team asserts that the density measurements of the planets indicate that the six innermost are Earth-like rocky worlds.

And that’s just the beginning.

Three of the planets lie in the star’s habitable zone. If you aren’t familiar with the term, the habitable zone (also known as the “goldilocks zone”) is the region surrounding a star in which liquid water could theoretically exist. This means that all three of these alien worlds may have entire oceans of water, dramatically increasing the possibility of life. The other planets are less likely to host oceans of water, but the team states that liquid water is still a possibility on each of these worlds.

Summing up the work, lead author Michaël Gillon notes that this planetary system has the largest number of Earth-sized planets yet found and the largest number of worlds that could support liquid water: “This is an amazing planetary system — not only because we have found so many planets, but because they are all surprisingly similar in size to the Earth!”

Co-author Amaury Triaud notes that the star in this system is an “ultracool dwarf,” and he clarifies what this means in relation to the planets: “The energy output from dwarf stars like TRAPPIST-1 is much weaker than that of our Sun. Planets would need to be in far closer orbits than we see in the Solar System if there is to be surface water. Fortunately, it seems that this kind of compact configuration is just what we see around TRAPPIST-1.”
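
Triaud's point follows from the inverse-square law: the distance at which a planet receives Earth-like stellar flux scales as the square root of the star's luminosity. The sketch below illustrates this using an assumed luminosity of about 0.05 percent of the Sun's for an ultracool dwarf like TRAPPIST-1; the number is an illustrative assumption, not a value from the article.

import math

L_star_over_L_sun = 5.2e-4                          # assumed luminosity ratio for an ultracool dwarf
d_earthlike_flux_au = math.sqrt(L_star_over_L_sun)  # distance in AU receiving Earth-like flux
print(f"Earth-equivalent insolation at roughly {d_earthlike_flux_au:.3f} AU")
# about 0.023 AU, i.e. orbits dozens of times tighter than Earth's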

REACHING ANOTHER WORLD

The system is just 40 light-years away. On a cosmic scale, that’s right next door. Of course, practically speaking, it would still take us hundreds of thousands of years to get there with today's technology – but again, it is notable in that the find speaks volumes about the potential for life-as-we-know-it beyond Earth.

Moreover, the technology of tomorrow could get us to this system a lot faster.

These new discoveries ultimately mean that TRAPPIST-1 is of monumental importance for future study. The Hubble Space Telescope is already being used to search for atmospheres around the planets, and Emmanuël Jehin, a scientist who also worked on the research, asserts that future telescopes could allow us to truly see into the heart of this system: “With the upcoming generation of telescopes, such as ESO’s European Extremely Large Telescope and the NASA/ESA/CSA James Webb Space Telescope, we will soon be able to search for water and perhaps even evidence of life on these worlds.”
(FULL STORY)

Nearby Star Has 7 Earth-Sized Worlds - Most In Habitable Zone
[2/21/2017]
It will be announced tomorrow by NASA that Michael Gillon et al have confirmed 4 more Earth-sized planets circling TRAPPIST-1 in addition to 3 already discovered.

It is possible that most of the planets confirmed thus far circling TRAPPIST-1 could be in the star's habitable zone. The inner 6 planets are probably rocky in composition and may be just the right temperature for liquid water to exist (between 0 and 100 degrees C) - if they have any water, that is. The outermost 7th planet still needs some more observations to nail down its orbit and composition.
(FULL STORY)

Data About 2 Distant Asteroids: Clues to the Possible Planet Nine
[2/22/2017]
In the year 2000 the first of a new class of distant solar system objects was discovered, orbiting the Sun at a distance greater than that of Neptune: the "extreme trans-Neptunian objects" (ETNOs).

Their orbits are very far from the Sun compared with that of the Earth. We orbit the Sun at a mean distance of one astronomical unit (1 AU which is 150 million kilometres) but the ETNOs orbit at more than 150 AU. To give an idea of how far away they are, Pluto's orbit is at around 40 AU and its closest approach to the Sun (perihelion) is at 30 AU. This discovery marked a turning point in Solar System studies, and up to now, a total of 21 ETNOs have been identified.

Recently, a number of studies have suggested that the dynamical parameters of the ETNOs could be better explained if there were one or more planets with masses several times that of the Earth orbiting the Sun at distances of hundreds of AU. In particular, in 2016 the researchers Brown and Batygin used the orbits of seven ETNOs to predict the existence of a "superearth" of roughly 10 Earth masses orbiting the Sun at some 700 AU. This range of masses is termed sub-Neptunian. This idea is referred to as the Planet Nine Hypothesis and is one of the current subjects of interest in planetary science. However, because the objects are so far away the light we receive from them is very weak, and until now the only one of the 21 ETNOs observed spectroscopically was Sedna.

Now, a team of researchers led by the Instituto de Astrofísica de Canarias (IAC) in collaboration with the Complutense University of Madrid has taken a step towards the physical characterization of these bodies, and towards confirming or refuting the hypothesis of Planet Nine by studying them. The scientists have made the first spectroscopic observations of 2004 VN112 and 2013 RF98, both of them particularly interesting dynamically because their orbits are almost identical and the poles of the orbits are separated by a very small angle. This suggests a common origin, and their present-day orbits could be the result of a past interaction with the hypothetical Planet Nine. This study, recently published in Monthly Notices of the Royal Astronomical Society, suggests that this pair of ETNOs was a binary asteroid which separated after an encounter with a planet beyond the orbit of Pluto.

To reach these conclusions, they made the first spectroscopic observations of 2004 VN112 and 2013 RF98 in the visible range. These were performed in collaboration with the support astronomers Gianluca Lombardi and Ricardo Scarpa, using the OSIRIS spectrograph on the Gran Telescopio CANARIAS (GTC), situated in the Roque de los Muchachos Observatory (Garafía, La Palma). It was hard work to identify these asteroids because their great distance means that their apparent movement on the sky is very slow. The team then measured their apparent magnitudes (their brightness as seen from Earth) and also recalculated the orbit of 2013 RF98, which had been poorly determined. They found this object at a distance of more than an arcminute away from the position predicted from the ephemerides. These observations have helped to improve the computed orbit, and have been published by the Minor Planet Center (MPEC 2016-U18: 2013 RF98), the organization responsible for the identification of comets and minor planets (asteroids) as well as for measurements of their parameters and orbital positions.

The visible spectrum can also give some information about their composition. By measuring the slope of the spectrum, it can be determined whether they have pure ices on their surfaces, as is the case for Pluto, or highly processed carbon compounds. The spectrum can also indicate the possible presence of amorphous silicates, as in the Trojan asteroids associated with Jupiter. The values obtained for 2004 VN112 and 2013 RF98 are almost identical and similar to those observed photometrically for two other ETNOs, 2000 CR105 and 2012 VP113. Sedna, however, the only one of these objects which had been previously observed spectroscopically, shows very different values from the others. These five objects are part of the group of seven used to test the hypothesis of Planet Nine, which suggests that all of them should have a common origin, except for Sedna, which is thought to have come from the inner part of the Oort cloud.

"The similar spectral gradients observed for the pair 2004 VN112 - 2013 RF98 suggests a common physical origin", explains Julia de León, the first author of the paper, an astrophysicist at the IAC. "We are proposing the possibility that they were previously a binary asteroid which became unbound during an encounter with a more massive object". To validate this hypothesis, the team performed thousands of numerical simulations to see how the poles of the orbits would separate as time went on. The results of these simulations suggest that a possible Planet Nine, with a mass of between 10 and 20 Earth masses orbiting the Sun at a distance between 300 and 600 AU could have deviated the pair 2004 VN112 - 2013 RF98 around 5 and 10 million years ago. This could explain, in principle, how these two asteroids, starting as a pair orbiting one another, became gradually separated in their orbits because they made an approach to a much more massive object at a particular moment in time.

(FULL STORY)

Tune Your Radio: Galaxies Sing When Forming Stars
[2/22/2017]
Almost all the light we see in the universe comes from stars which form inside dense clouds of gas in the interstellar medium.

The rate at which they form (referred to as the star formation rate, or SFR) depends on the reserves of gas in the galaxies and the physical conditions in the interstellar medium, which vary as the stars themselves evolve. Measuring the star formation rate is hence key to understanding the formation and evolution of galaxies.

Until now, a variety of observations at different wavelengths have been performed to calculate the SFR, each with its advantages and disadvantages. The most commonly used SFR tracers, the visible and ultraviolet emission, can be partly absorbed by interstellar dust. This has motivated the use of hybrid tracers, which combine two or more different emissions, including the infrared, which can help to correct for this dust absorption. However, the use of these tracers is often uncertain because other sources or mechanisms which are not related to the formation of massive stars can intervene and lead to confusion.

Now, an international research team led by the IAC astrophysicist Fatemeh Tabatabaei has made a detailed analysis of the spectral energy distribution of a sample of galaxies, and has been able to measure, for the first time, the energy they emit within the frequency range of 1-10 Gigahertz, which can be used to determine their star formation rates. "We have used" explains this researcher "the radio emission because, in previous studies, a tight correlation was detected between the radio and the infrared emission, covering a range of more than four orders of magnitude". In order to explain this correlation, more detailed studies were needed to understand the energy sources and processes which produce the radio emission observed in the galaxies.

"We decided within the research group to make studies of galaxies from the KINGFISH sample (Key Insights on Nearby Galaxies: a Far-Infrared Survey with Herschel) at a series of radio frequencies", recalls Eva Schinnerer from the Max-Planck-Institut für Astronomie (MPIA) in Heidelberg, Germany. The final sample consists of 52 galaxies with very diverse properties. "As a single dish, the 100-m Effelsberg telescope with its high sensitivity is the ideal instrument to receive reliable radio fluxes of weak extended objects like galaxies", explains Marita Krause from the Max-Planck-Institut für Radioastronomie (MPIfR) in Bonn, Germany, who was in charge of the radio observations of those galaxies with the Effelsberg radio telescope. "We named it the KINGFISHER project, meaning KINGFISH galaxies Emitting in Radio."

The results of this project, published today in The Astrophysical Journal, show that the 1-10 Gigahertz radio emission used is an ideal star formation tracer for several reasons. Firstly, the interstellar dust does not attenuate or absorb radiation at these frequencies; secondly, it is emitted by massive stars during several phases of their formation, from young stellar objects to HII regions (zones of ionized gas) and supernova remnants, and finally, there is no need to combine it with any other tracer. For these reasons, measurements in the chosen range are a more rigorous way to estimate the formation rate of massive stars than the tracers traditionally used.

This study also clarifies the nature of the feedback processes occurring due to star formation activity, which are key to the evolution of galaxies. "By differentiating the origins of the radio continuum, we could infer that the cosmic ray electrons (a component of the interstellar medium) are younger and more energetic in galaxies with higher star formation rates, which can cause powerful winds and outflows and have important consequences for the regulation of star formation", explains Fatemeh Tabatabaei.

Article: "The radio spectral energy distribution and star formation rate calibration in galaxies", by F. Tabatabaei et al. The Astrophysical Journal. Volume 836, Number 2. (DOI: 10.3847/1538-4357/836/2/185)

http://iopscience.iop.org/article/10.3847/1538-4357/836/2/185
(FULL STORY)

Coders Race to Save NASA's Climate Data
[2/14/2017]
A group of coders is racing to save the government's climate science data.

On Saturday (Feb. 11), 200 programmers crammed themselves into the Doe Library at the University of California, Berkeley, furiously downloading NASA's Earth science data in a hackathon, Wired reported. The group's goal: rescue data that may be deleted or hidden under President Donald Trump's administration.

The process involves developing web-crawler scripts to trawl the internet, finding federal data and patching it together into coherent data sets. The hackers are also keeping track of data as it disappears; for instance, the Global Data Center's reports and one of NASA's atmospheric carbon dioxide (CO2) data sets have already been removed from the web.

By the end of Saturday, when the hackathon concluded, the coders had successfully downloaded thousands of pages — essentially all of NASA's climate data — onto the Internet Archive, a digital library.

But there is still more to be done. While the climate data may be safe for now, many other data sets out there could be lost, such as National Parks Service data on GPS boundaries and species tallies, Wired reported.

"Climate change data is just the tip of the iceberg," Eric Kansa, an anthropologist who manages archaeological data archiving for the nonprofit group Open Context, told Wired. "There are a huge number of other data sets being threatened [that are rich] with cultural, historical, sociological information."

Originally published on Live Science.

(FULL STORY)

You Can Help Scientists Find the Next Earth-Like Planet
[2/14/2017]
GRAVITATIONAL WOBBLES

NASA’s Kepler space telescope holds the record when it comes to candidate and confirmed exoplanets — to date, it has identified more than 5,000. To scan the universe for these alien planets, Kepler uses what’s called the “transit method.” Basically, Kepler watches out for the brightness dips that occur when a planet crosses the face of the star it orbits.
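
The depth of the brightness dip in the transit method is set by the ratio of the planet's and star's cross-sectional areas, roughly (R_planet / R_star)^2. The sketch below works that out for an Earth-size and a Jupiter-size planet crossing a Sun-like star; the radii are standard illustrative values, not numbers from this article.

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

for name, r_planet_km in (("Earth-size", R_EARTH_KM), ("Jupiter-size", R_JUPITER_KM)):
    depth = (r_planet_km / R_SUN_KM) ** 2  # fractional dip in the star's brightness
    print(f"{name} planet: dip of about {depth:.3%}")

# Prints approximately:
#   Earth-size planet: dip of about 0.008%
#   Jupiter-size planet: dip of about 1.010%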

This isn’t the only method to catch exoplanets. The High Resolution Echelle Spectrometer (HIRES) instrument at the Keck Observatory in Hawaii detects radial velocity instead of brightness dips. This radial velocity method searches stars for signs of gravitational wobbles induced by orbiting planets. HIRES was part of a two-decade-long radial-velocity planet-hunting program, and it has compiled almost 61,000 individual measurements of more than 1,600 stars.

“HIRES was not specifically optimized to do this type of exoplanet detective work, but has turned out to be a workhorse instrument of the field,” said Steve Vogt, from the University of California Santa Cruz, who built the instrument. “I am very happy to contribute to science that is fundamentally changing how we view ourselves in the universe.”

From this huge amount of data, a team of researchers led by Paul Butler of the Carnegie Institution for Science in Washington, D.C., identified more than 100 possible exoplanets. Specifically, the researchers identified 60 candidate planets, plus 54 more that require further examination. They published their study in The Astronomical Journal.

“We were very conservative in this paper about what counts as an exoplanet candidate and what does not,” researcher Mikko Tuomi explained, “and even with our stringent criteria, we found over 100 new likely planet candidates.” Among the candidate exoplanets, one could be orbiting the fourth-closest star (GJ 411) to our Sun just about 8.3 light years away. It’s not an Earth-twin however, as this potential planet has an orbital period that’s equivalent to just 10 days.

COLLABORATIVE EXPLORATION

There’s still a considerable amount of data to comb through. So, together with their findings, Butler’s team made the HIRES data set available to the public. “One of our key goals in this paper is to democratize the search for planets,” explained team member Greg Laughlin of Yale. “Anyone can download the velocities published on our website and use the open source Systemic software package and try fitting planets from the data.”

It’s certainly a noble idea and a timely one. “I think this paper sets a precedent for how the community can collaborate on exoplanet detection and follow-up”, said team-member Johanna Teske. “With NASA’s TESS mission on the horizon, which is expected to detect 1000+ planets orbiting bright, nearby stars, exoplanet scientists will soon have a whole new pool of planets to follow up.”

Other tools that can facilitate this search for exoplanets and potentially habitable ones include the recently completed James Webb Space Telescope (JWST). Its powerful array of mirrors and instruments will give our ability to scan the universe a much-appreciated boost. Technological advances like the JWST, NASA’s TESS, and a couple of other interstellar eyes will allow us to see the universe like never before.
(FULL STORY)

Scientists Discover Over 100 New Exoplanets
[2/14/2017]
An international team of astronomers has announced the discovery of over a hundred new exoplanet candidates. These exoplanets were found using two decades' worth of data from the Keck Observatory in Hawaii. Their results were recently published in a paper in the Astronomical Journal, and among the discoveries is a planet orbiting the fourth-closest star to our own, only 8 light-years away.

Finding exoplanets isn't easy. Planets beyond our solar system are tiny and dark when compared to their host stars, so some advanced techniques have to be used to pinpoint them. The Kepler space telescope, for instance, finds exoplanets by looking for stars that regularly dim slightly. This dimming is caused by an exoplanet blocking some of the star's light when it passes in front, and the change in brightness can tell us a lot about the size of the planet and how fast it orbits.

However, there are additional ways to spot an exoplanet. The Keck Observatory uses a different method, called the radial velocity method, that looks at how the star moves. When a planet orbits a star, the planet's gravity causes the star to wobble a little bit. For instance, our own planet causes the sun to move a few inches per second, while Jupiter causes the sun to move about 40 feet per second. This wobbling is detectable by very sensitive telescopes, like the HIRES spectrometer at the Keck Observatory.
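
The wobble figures quoted above can be reproduced with a back-of-the-envelope formula: for a circular orbit, the star's reflex speed is roughly the planet's orbital speed multiplied by the planet-to-star mass ratio. The sketch below uses standard solar-system values for illustration; it is not part of the Keck analysis.

EARTH_SUN_MASS_RATIO = 3.0e-6      # Earth mass / Sun mass
JUPITER_SUN_MASS_RATIO = 9.5e-4    # Jupiter mass / Sun mass
V_EARTH_M_S = 29_780.0             # Earth's orbital speed, m/s
V_JUPITER_M_S = 13_070.0           # Jupiter's orbital speed, m/s

v_sun_earth = EARTH_SUN_MASS_RATIO * V_EARTH_M_S        # about 0.09 m/s (a few inches per second)
v_sun_jupiter = JUPITER_SUN_MASS_RATIO * V_JUPITER_M_S  # about 12.4 m/s (roughly 40 feet per second)
print(f"Sun's reflex speed due to Earth:   {v_sun_earth:.2f} m/s")
print(f"Sun's reflex speed due to Jupiter: {v_sun_jupiter:.1f} m/s")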

Before Kepler, the radial velocity method was the best way to find new exoplanets. Scientists using this method have found hundreds of worlds over the past 25 years, including the first known exoplanet around a Sun-like star. However, studying a star's radial velocity typically requires a lot of observing time in order to separate the planetary signal from other sources of variation.

The Keck data spans two decades, which is more than enough time to separate out the signal, and the data covers so many star systems that it could potentially contain evidence for thousands of new exoplanets. In fact, the dataset is so massive that one group of people could never get through all of it. To solve this problem, the team is releasing their data to the public, in the hopes that people will use that data to find even more exoplanets. If you're interested in discovering your very own alien planet, you can find the data and instructions on the team's website.
(FULL STORY)

Why These Scientists Fear Contact With Space Aliens
[2/8/2017]
The more we learn about the cosmos, the more it seems possible that we are not alone. The entire galaxy is teeming with worlds, and we're getting better at listening — so the question, "Is there anybody out there?" is one we may be able to answer soon.

But do we really want to know? If aliens are indeed out there, would they be friendly explorers, or destroyers of worlds? This is a serious question no longer confined to science fiction, because a growing group of astronomers has taken it upon themselves to do more than just listen. Some are advocating for a beacon swept across the galaxy, letting E.T. know we're home, to see if anyone comes calling. Others argue we would be wise to keep Earth to ourselves.

"There's a possibility that if we actively message, with the intention of getting the attention of an intelligent civilization, that the civilization we contact would not necessarily have our best interests in mind," says Lucianne Walkowicz, an astrophysicist at the Adler Planetarium in Chicago. "On the other hand, there might be great benefits. It could be something that ends life on Earth, and it might be something that accelerates the ability to live quality lives on Earth. We have no way of knowing." Like many other astronomers, Walkowicz isn't convinced one way or the other — but she said the global scientific community needs to talk about it.

Internet investor and science philanthropist Yuri Milner shows the Starchip, a microelectronic component spacecraft. The $100 million project is aimed at establishing the feasibility of sending a swarm of tiny spacecraft, each weighing far less than an ounce, to the Alpha Centauri star system.

That conversation is likely to heat up soon thanks to the Breakthrough Initiatives, a philanthropic organization dedicated to interstellar outreach that's funded by billionaire Russian tech mogul Yuri Milner. Its Breakthrough Message program would solicit ideas from around the world to compose a message to aliens and figure out how to send it. Outreach for the program may launch as soon as next year, according to Pete Worden, the Breakthrough Initiatives' director.

"We're well aware of the argument, 'Do you send things or not?' There's pretty vigorous opinion on both sides of our advisory panel," Worden says. "But it's a very useful exercise to start thinking about what to respond. What's the context? What best represents the people on Earth? This is an exercise for humanity, not necessarily just about what we would send." Members of the advisory panel have argued that a picture (and the thousand words it may be worth) would be the best message.

Next comes "more of a technical expertise question," Worden says. "Given that you have an image or images, how do you best encrypt it so it can be received?"

Breakthrough Message will work on those details, including how to transmit the pictures, whether through radio or laser transmitters; how to send it with high fidelity, so it's not rendered unreadable because of interference from the interstellar medium; which wavelengths of light to use, or whether to spread a message across a wide spectrum; how many times to send it, and how often; and myriad other technical concerns.

The scientific community continues to debate these questions. For instance, Philip Lubin of the University of California, Santa Barbara, has published research describing a laser array that could conceivably broadcast a signal through the observable universe.

Breakthrough is also working on where to send such a message, Worden adds. The $100 million Breakthrough Listen project is searching for any evidence of life in nearby star systems, which includes exoplanets out to a few hundred light years away.

"If six months from now, we start to see some interesting signals, we'll probably accelerate the Message program," he says.

The fact that there have been no signals yet does pose a conundrum. In a galaxy chock full of worlds, why isn't Earth crawling with alien visitors? The silence amid the presence of such plentiful planets is called the Fermi Paradox, named for the physicist Enrico Fermi, who first asked "Where is everybody?" in 1950.

In the decades since, astronomers have come up with possible explanations ranging from sociology to biological complexity. Aliens might be afraid of us, or consider us unworthy of attention, for instance. Or it may be that aliens communicate in ways that we can't comprehend, so we're just not hearing them. Or maybe aliens lack communication capability of any kind. Of course there's also the possibility that there are no aliens.

Stephen Hawking announces the "Breakthrough Starshot" initiative in New York in 2016. Dennis Van Tine / Star Max/IPx via AP
But those questions don't address the larger one: Whether it's a good idea to find out. Some scientists, most notably Stephen Hawking, are convinced the answer is a firm "No."

"We only have to look at ourselves to see how intelligent life might develop into something we wouldn't want to meet," Hawking said in 2010. He has compared meeting aliens to Christopher Columbus meeting Native Americans: "That didn't turn out so well," he said.

Others have warned of catastrophic consequences ripped from the pages of science fiction: Marauding aliens that could follow our message like a homing beacon, and come here to exploit Earth's resources, exploit humans, or even to destroy all life as we know it.

"Any civilization detecting our presence is likely to be technologically very advanced, and may not be disposed to treat us nicely. At the very least, the idea seems morally questionable," physicist Mark Buchanan argued in the journal Nature Physics last fall.

Other astronomers think it's worth the risk — and they add, somewhat darkly, that it's too late anyway. We are a loud species, and our messages have been making their way through the cosmos since the dawn of radio.

"If we are in danger of an alien invasion, it's too late," wrote Douglas Vakoch, the director of Messaging Extraterrestrial Intelligence (METI) International, in a rebuttal last fall in Nature Physics. Vakoch, the most prominent METI proponent, argues that if we don't tell anyone we're here, we could miss out on new technology that could help humanity, or even protect us from other, less friendly aliens.

David Grinspoon, an author and astrobiologist at the Planetary Science Institute in Tucson, says he first thought, "'Oh, come on, you've got to be kidding me.' It seems kind of absurd aliens are going to come invade us, steal our precious bodily fluids, breed us like cattle, 'To Serve Man,' " a reference to a 1962 episode of "The Twilight Zone" in which aliens hatch a plan to use humans as a food source.

Originally, Grinspoon thought there would be no harm in setting up a cosmic lighthouse. "But I've listened to the other side, and I think they have a point," he adds. "If you live in a jungle that might be full of hungry lions, do you jump down from your tree and go, 'Yoo-hoo?'"

Many have already tried, albeit some more seriously than others.

In 2008, NASA broadcast the Beatles tune "Across the Universe" toward Polaris, the North Star, commemorating the space agency's 50th birthday, the 45th anniversary of the Deep Space Network, and the 40th anniversary of that song.

Later that year, a tech startup working with Ukraine's space agency beamed pictures and messages to the exoplanet Gliese 581 c. Other, sillier messages to the stars have included a Doritos commercial and a bunch of Craigslist ads.

Last October, the European Space Agency broadcast 3,775 text messages toward Polaris. It's not known to harbor any exoplanets, and even if it did, those messages would take some 425 years to arrive; yet the exercise, conceived by an artist, raised alarm among astronomers. Several prominent scientists, including Walkowicz, signed on to a statement guarding against any future METI efforts until some sort of international consortium could reach agreement.

Even if we don't send a carefully crafted message, we're already reaching for the stars. The Voyager probe is beyond the solar system in interstellar space, speeding toward a star 17.6 light-years from Earth. Soon, if Milner has his way, we may be sending even more robotic emissaries.

Milner's $100 million Breakthrough Starshot aims to send a fleet of paper-thin space chips to the Alpha Centauri system within a generation's time. Just last fall, astronomers revealed that a potentially rocky, Earth-sized planet orbits Proxima Centauri, a small red dwarf star in that system and the nearest to our own, just four light years away. The chips would use a powerful laser to accelerate to a substantial fraction of the speed of light, covering the distance between the stars within a couple of decades. A team of scientists and engineers is working on how to build the chips and the laser, according to Worden.

"If we find something interesting, obviously we're going to get a lot more detail if we can visit, and fly by," he says. "Who knows what's possible in 50 years?"

But some time sooner than that, we will need to decide whether to say anything at all. Ultimately, those discussions are important for humanity, Worden, Walkowicz and Grinspoon all say.

"Maybe it's more important that we get our act together on Earth," Grinspoon says. "We are struggling to find a kind of global identity on this planet that will allow us to survive the problems we've created for ourselves. Why not treat this as something that allows us to practice that kind of thinking and action?"
(FULL STORY)

Scientists May Have Solved the Biggest Mystery of the Big Bang
[2/4/2017]
THE UNANSWERED QUESTION

The European Organization for Nuclear Research (CERN) works to help us better understand what comprises the fabric of our universe. At this laboratory on the Franco-Swiss border near Geneva, engineers and physicists use particle accelerators and detectors to gain insight into the fundamental properties of matter and the laws of nature. Now, CERN scientists may have found an answer to one of the most pressing mysteries in the Standard Model of physics, and their research can be found in Nature Physics.

According to the Big Bang Theory, the universe began with the production of equal amounts of matter and antimatter. Since matter and antimatter cancel each other out, releasing light as they destroy each other, only a minuscule number of particles (mostly just radiation) should exist in the universe. But, clearly, we have more than just a few particles in our universe. So, what is the missing piece? Why is the amount of matter and the amount of antimatter so unbalanced?


The Standard Model of particle physics does account for a small percentage of this asymmetry, but the majority of the matter produced during the Big Bang remains unexplained. Noticing this serious gap in information, scientists theorized that the laws of physics are not the same for matter and antimatter (or particles and antiparticles). But how do they differ? Where do these laws separate?

This separation, known as charge-parity (CP) violation, has been seen in hadronic subatomic particles (mesons), but the particles in question are baryons. Finding evidence of CP violation in these particles would allow scientists to calculate the amount of matter in the universe, and answer the question of why we have an asymmetric universe. After decades of effort, the scientists at CERN think they’ve done just that.

Using a Large Hadron Collider (LHC) detector, CERN scientists were able to witness CP violation in baryon particles. The matter (Λb0) and antimatter (Λb0-bar) versions of the particle, produced in the collisions, decayed into different components, with a significant difference in the quantities of the matter and antimatter baryons. According to the team’s report, “The LHCb data revealed a significant level of asymmetries in those CP-violation-sensitive quantities for the Λb0 and Λb0-bar baryon decays, with differences in some cases as large as 20 percent.”

WHAT DOES THIS MEAN?

This discovery isn’t yet statistically significant enough to claim that it is definitive proof of a CP variation, but most believe that it is only a matter of time. “Particle physics results are dragged, kicking and screaming, out of the noise via careful statistical analysis; no discovery is complete until the chance of it being a fluke is below one in a million. This result isn’t there yet (it’s at about the one-in-a-thousand level),” says scientist Chris Lee. “The asymmetry will either be quickly strengthened or it will disappear entirely. However, given that the result for mesons is well and truly confirmed, it would be really strange for this result to turn out to be wrong.”

This borderline discovery is one huge leap forward in fully understanding what happened before, during, and after the Big Bang. While developments in physics like this may seem, from the outside, to be technical achievements exciting only to scientists, this new information could be the key to unlocking one of the biggest mysteries in modern physics. If the scientists at CERN are able to prove that matter and antimatter do, in fact, obey separate laws of physics, science as we know it would change and we’ll need to reevaluate our understanding of our physical world.
(FULL STORY)

New Research Shows the Universe May Have Once Been a Hologram
[1/31/2017]
A TWO-DIMENSIONAL BOUNDARY

New research suggests that the universe may have been a hologram at one point in time, specifically a few hundred thousand years after the Big Bang. The study, published in the journal Physical Review Letters, is the latest research on the “holographic principle,” which suggests that the laws of physics can apply to the universe as a two-dimensional plane.

“We are proposing using this holographic universe, which is a very different model of the Big Bang than the popularly accepted one that relies on gravity and inflation,” said lead author Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute. “Each of these models makes distinct predictions that we can test as we refine our data and improve our theoretical understanding – all within the next five years.”

The theory suggests that the volume of space appears three-dimensional, but is actually encoded on a two-dimensional boundary, or an observer-dependent horizon, that requires one less dimension than it appears to have. In short, we see it as three-dimensional, but it is projected from a two-dimensional source, similar to how a hologram works.

“The idea is similar to that of ordinary holograms, where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card,” explained researcher Kostas Skenderis from the University of Southampton. “However, this time, the entire universe is encoded.”

MAKING SENSE OF COSMIC INFLATION

The researchers arrived at this conclusion after observing irregularities in the cosmic microwave background — the Big Bang’s remnant. The team used a model with one time and two space dimensions. Actual data from the universe, including cosmic microwave background observations, were then plugged into the model. The researchers saw that the model and the data fit well, but only on large angular scales, corresponding to patches of sky roughly 10 degrees across or larger.

“I would say you don’t live in a hologram, but you could have come out of a hologram,” Afshordi told Gizmodo. “[In 2017], there are definitely three dimensions.”

While many accept the cosmic inflation that came after the Big Bang, our understanding of physics – including current general relativity and quantum mechanics theories – doesn’t work with what we observe. The fundamental laws of physics are incapable of explaining how the universe as we know it, with all its contents, could’ve fit in a small package that exponentially expanded.

This is where Afshordi’s research and the holographic model come in. These could lead to new theories about the Big Bang and a functioning theory of quantum gravity — a theory that meshes quantum mechanics with Einstein’s theory of gravity. “The key to understanding quantum gravity is understanding field theory in one lower dimension,” Afshordi says. “Holography is like a Rosetta Stone, translating between known theories of quantum fields without gravity and the uncharted territory of quantum gravity itself.”

The question remains, though: how did the universe transition from 2D to 3D? Further study is needed to explain this.
(FULL STORY)

Dark energy emerges when energy conservation is violated
[1/18/2017]
The conservation of energy is one of physicists' most cherished principles, but its violation could resolve a major scientific mystery: why is the expansion of the universe accelerating? That is the eye-catching claim of a group of theorists in France and Mexico, who have worked out that dark energy can take the form of Albert Einstein's cosmological constant by effectively sucking energy out of the cosmos as it expands.

The cosmological constant is a mathematical term describing an anti-gravitational force that Einstein had inserted into his equations of general relativity in order to counteract the mutual attraction of matter within a static universe. It was then described by Einstein as his "biggest blunder", after it was discovered that the universe is in fact expanding. But then the constant returned to favour in the late 1990s following the discovery that the universe's expansion is accelerating.

For many physicists, the cosmological constant is a natural candidate to explain dark energy. Since it is a property of space–time itself, the constant could represent the energy generated by the virtual particles that quantum mechanics dictates continually flit into and out of existence. Unfortunately the theoretical value of this "vacuum energy" is up to a staggering 120 orders of magnitude larger than observations of the universe's expansion imply.

The latest work, carried out by Alejandro Perez and Thibaut Josset of Aix Marseille University together with Daniel Sudarsky of the National Autonomous University of Mexico, proposes that the cosmological constant is instead the running total of all the non-conserved energy in the history of the universe. The "constant" in fact would vary – increasing when energy flows out of the universe and decreasing when it returns. However, the constant would appear unchanging in our current (low-density) epoch because its rate of change would be proportional to the universe's mass density. In this scheme, vacuum energy does not contribute to the cosmological constant.

The researchers had to look beyond general relativity because, like Newtonian mechanics, it requires energy to be conserved. Strictly speaking, relativity requires the conservation of a multi-component "energy-momentum tensor". That conservation is manifest in the fact that, on very small scales, space–time is flat, even though Einstein's theory tells us that mass distorts the geometry of space–time.


In contrast, most attempts to devise a theory of quantum gravity require space–time to come in discrete grains at the smallest (Planck-length) scales. That graininess opens the door to energy non-conservation. Unfortunately, no fully formed quantum-gravity theory exists yet, and so the trio instead turned to a variant of general relativity known as unimodular gravity, which allows some violation of energy conservation. They found that when they constrained the amount of energy that can be lost from (or gained by) the universe to be consistent with the cosmological principle – on very large scales the process must be both homogeneous and isotropic – the unimodular equations generated a cosmological-constant-like entity.

In the absence of a proper understanding of Planck-scale space–time graininess, the researchers were unable to calculate the exact size of the cosmological constant. Instead, they incorporated the unimodular equations into a couple of phenomenological models that exhibit energy non-conservation. One of these describes how matter might propagate in granular space–time, while the other modifies quantum mechanics to account for the disappearance of superposition states at macroscopic scales.

These models both contain two free parameters, which were adjusted to make the models consistent with null results from experiments that have looked for energy non-conservation in our local universe. Despite this severe constraint, the researchers found that the models generated a cosmological constant of the same order of magnitude as that observed. "We are saying that even though each individual violation of energy conservation is tiny, the accumulated effect of these violations over the very long history of the universe can lead to dark energy and accelerated expansion," Perez says.

In future, he says it might be possible to subject the new idea to more direct tests, such as observing supernovae very precisely to try to work out whether the universe's accelerating expansion is driven by a constant or a varying force. The model could also be improved so that it captures dark energy's evolution from just after the Big Bang, with the results then compared against observations of the cosmic microwave background.

If the trio are ultimately proved right, it would not mean physicists having to throw their long-established conservation principles completely out of the window. A variation in the cosmological constant, Perez says, could point to a deeper, more abstract kind of conservation law. "Just as heat is energy stored in the chaotic motion of molecules, the cosmological constant would be 'energy' stored in the dynamics of atoms of space–time," he explains. "This energy would only appear to be lost if space–time is assumed to be smooth."

Other physicists are cautiously supportive of the new work. George Ellis of the University of Cape Town in South Africa describes the research as "no more fanciful than many other ideas being explored in theoretical physics at present". The fact that the models predict energy to be "effectively conserved on solar-system scales" – a crucial check, he says – makes the proposal "viable" in his view.

Lee Smolin of the Perimeter Institute for Theoretical Physics in Canada, meanwhile, praises the researchers for their "fresh new idea", which he describes as "speculative, but in the best way". He says that the proposal is "probably wrong", but that if it's right "it is revolutionary".
The research is described in Physical Review Letters.
(FULL STORY)

Physicists measure the loss of dark matter since the birth of the universe
[12/28/2016]
Russian scientists have discovered that the proportion of unstable particles in the composition of dark matter in the days immediately following the Big Bang was no more than 2 percent to 5 percent. Their study has been published in Physical Review D.

"The discrepancy between the cosmological parameters in the modern universe and the universe shortly after the Big Bang can be explained by the fact that the proportion of dark matter has decreased. We have now, for the first time, been able to calculate how much dark matter could have been lost, and what the corresponding size of the unstable component would be," says co-author Igor Tkachev of the Department of Experimental Physics at INR.
Astronomers first suspected that there was a large proportion of hidden mass in the universe back in the 1930s, when Fritz Zwicky discovered "peculiarities" in a cluster of galaxies in the constellation Coma Berenices—the galaxies moved as if they were under the effect of gravity from an unseen source. This hidden mass, which is only deduced from its gravitational effect, was given the name dark matter. According to data from the Planck space telescope, the proportion of dark matter in the universe is 26.8 percent; the rest is "ordinary" matter (4.9 percent) and dark energy (68.3 percent).
The nature of dark matter remains unknown. However, its properties could potentially help scientists to solve a problem that arose after studying observations from the Planck telescope. This device accurately measured the fluctuations in the temperature of the cosmic microwave background radiation—the "echo" of the Big Bang. By measuring these fluctuations, the researchers were able to calculate key cosmological parameters using observations of the universe in the recombination era—approximately 300,000 years after the Big Bang.
However, when researchers directly measured the speed of the expansion of galaxies in the modern universe, it turned out that some of these parameters varied significantly—namely the Hubble parameter, which describes the rate of expansion of the universe, and also the parameter associated with the number of galaxies in clusters. "This variance was significantly more than margins of error and systematic errors known to us. Therefore, we are either dealing with some kind of unknown error, or the composition of the ancient universe is considerably different to the modern universe," says Tkachev.
Figure caption: The concentration of the unstable component of dark matter F against the speed of expansion of non-gravitationally bound objects (proportional to the age of the Universe), for various combinations of Planck data on several different cosmological phenomena.

The discrepancy can be explained by the decaying dark matter (DDM) hypothesis, which states that in the early universe, there was more dark matter, but then part of it decayed.


"Let us imagine that dark matter consists of several components, as in ordinary matter (protons, electrons, neutrons, neutrinos, photons). And one component consists of unstable particles with a rather long lifespan. In the era of the formation of hydrogen, hundreds of thousands of years after the Big Bang, they are still in the universe, but by now (billions of years later), they have disappeared, having decayed into neutrinos or hypothetical relativistic particles. In that case, the amount of dark matter in the era of hydrogen formation and today will be different," says lead author Dmitry Gorbunov, a professor at MIPT and staff member at INR.
The authors of the study analyzed Planck data and compared them with the DDM model and the standard ΛCDM (Lambda-cold dark matter) model with stable dark matter. The comparison showed that the DDM model is more consistent with the observational data. However, the researchers found that the effect of gravitational lensing (the distortion of cosmic microwave background radiation by a gravitational field) greatly limits the proportion of decaying dark matter in the DDM model.
Using data from observations of various cosmological effects, the researchers were able to give an estimate of the relative concentration of the decaying components of dark matter in the region of 2 percent to 5 percent.
"This means that in today's universe, there is 5 percent less dark matter than in the recombination era. We are not currently able to say how quickly this unstable part decayed; dark matter may still be disintegrating even now, although that would be a different and considerably more complex model," says Tkachev.

More information: A. Chudaykin et al, Dark matter component decaying after recombination: Lensing constraints with Planck data, Physical Review D (2016). DOI: 10.1103/PhysR
(FULL STORY)

This star has a secret – even better than 'alien megastructures'
[1/13/2017]
When Yale researcher Tabetha Boyajian first focused on the star KIC 8462852 via the Kepler Space Telescope in September 2015, she didn't know what to make of it.

The star's light was mysterious – it was far too dim for a star of its age and type, and it dipped intermittently in brightness. Theories around Tabby’s star, as it was nicknamed, quickly piled up, with some scientists attributing the atypical dimming to surrounding cosmic dust or nearby comets. But more excitable space enthusiasts predicted alien activity, arguing that only orbiting alien structures could block a star’s light so effectively.

The so-called alien megastructure hypothesis persisted longer than most extra-terrestrial-based theories, simply because scientists had few alternative ideas to explain the star's peculiar blinking – until now. And the latest theory is almost as intriguing as the alien hypothesis.

Dr. Boyajian and her team weren't the first to spot the star: it was actually discovered in 1890. But their questions about the star's light pattern – and the subsequent alien-related theories – made the star, well, something of a star.

“We’d never seen anything like this star,” Boyajian told the Atlantic in October 2015. “It was really weird. We thought it might be bad data or movement on the spacecraft, but everything checked out.”


KIC 8462852's story became more intriguing in January 2016, New Scientist reports, when a comparison of the first image taken of Tabby's star, in 1890, with one taken in 1989 revealed that the star had dimmed 14 percent in the interim 100 years. And over one particularly confusing two-day period, the star dipped in brightness by 22 percent.

Tabby’s star kept scientists scratching their heads all last year. Volatility in light patterns is typical for young stars, but KIC 8462852 is mature.

“The steady brightness change in KIC 8462852 is pretty astounding,” Ben Montet, a scientist at the California Institute of Technology, said in an October statement. “It is unprecedented for this type of star to slowly fade for years, and we don’t see anything else like it in the Kepler data.”

Now, a team of scientists from Columbia University and the University of California, Berkeley, say they have found a reasonable explanation to KIC 8462852’s strange lighting.

“Following an initial suggestion by Wright & Sigurdsson, we propose that the secular dimming behavior is the result of the inspiral of a planetary body or bodies into KIC 8462852, which took place ~10 to 10^4 years ago (depending on the planet mass),” the three authors write in a study to be published Monday in the Monthly Notices of the Royal Astronomical Society.

“Gravitational energy released as the body inspirals into the outer layers of the star caused a temporary and unobserved brightening, from which the stellar flux is now returning to the quiescent state.”

In other words, KIC 8462852 ate a planet sometime in the past 10,000 years.

The theory goes like this:

If KIC 8462852 did eat a planet – which is extremely rare in the space world, unless a collision pushed the planet out of its orbit – the star’s brightness would increase for a short period of 200 to 10,000 years as it burned up the planet (short in star time, that is). But once the burning was complete, the star would go back to around its original level of brightness.

So we could be looking at KIC 8462852 during its post-planet digestion, as it dims back to normal, write the authors.

And KIC 8462852 could have been a messy eater, leaving crumbs – aka orbiting planet debris – that periodically block its light. “This paper puts a merger scenario on the table in a credible way,” Jason Wright, an astronomer at Penn State University, tells New Scientist. “I think this moves it into the top tier of explanations.”
(FULL STORY)

Testing theories of modified gravity
[1/12/2017]
The accelerated expansion of the universe is usually attributed to a mysterious dark energy, but there’s another conceivable explanation: modified gravity. Unmodified gravity—that is, Einstein’s general relativity—satisfactorily accounts for the dynamics of the solar system, where precision measurements can be made without the confounding influence of dark matter. Nor have any violations been detected in one of general relativity’s principal ingredients, the strong equivalence principle, which posits that inertial mass and gravitational mass are identical.

But those observational constraints are not ineluctable. In particular, a class of gravitational theories called Galileon models can also pass them. In 2012 Lam Hui and Alberto Nicolis of Columbia University devised a cosmic test that could refute or confirm the models. Their test hinges on the models’ central feature: an additional scalar field that couples to mass. The coupling can be characterized by a charge-like parameter, Q. For most cosmic objects, Q has the same value as the inertial mass. But for a black hole, whose mass arises entirely from its gravitational binding energy, Q is zero; the strong equivalence principle is violated.

Galaxies fall through space away from low concentrations of mass and toward high concentrations. The supermassive black holes at the centers of some galaxies are carried along with the flow. But if gravity has a Galileon component, the black hole feels less of a tug than do the galaxy’s stars, interstellar medium, and dark-matter particles. The upshot, Hui and Nicolis realized, is that the black hole will lag the rest of the galaxy and slip away from its center. The displacement is arrested when the black hole reaches the point where the lag is offset by the presence of more of the galaxy’s gravitational mass on one side of the black hole than on the other. Given the right circumstances, the displacement can be measured.
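
As a rough sketch of that equilibrium (our shorthand, not the authors' detailed treatment), the black hole stops slipping when the galaxy's own Newtonian pull at the offset balances the extra scalar-field acceleration that the stars, gas, and dark matter feel but the black hole (with Q = 0) does not:

\[
\frac{G\,M_{\rm enc}(r_{\star})}{r_{\star}^{2}} \;\approx\; a_{5},
\]

where a_5 denotes the fifth-force acceleration and r_⋆ the displacement of the black hole from the galaxy's centroid. Measuring r_⋆ therefore bounds a_5 and, with it, the Galileon coupling.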

Hui and Nicolis’s proposal has now itself been put to the test. Asha Asvathaman and Jeremy Heyl of the University of British Columbia, together with Hui, have applied it to two galaxies: M32, which is being pulled toward its larger neighbor, the Andromeda galaxy, and M87 (shown here), which is being pulled through the Virgo cluster of galaxies. Both M32 and M87 are elliptical galaxies. Because of their simple shapes, their centroids can be determined from optical observations. The locations of their respective black holes can be determined from radio observations. Although the limit on Galileon gravity that Asvathaman, Heyl, and Hui derived was too loose to refute or confirm the theory, they nevertheless validated the test itself. More precise astrometric observations could make it decisive. (A. Asvathaman, J. S. Heyl, L. Hui, Mon. Not. R. Astron. Soc., in press.)
(FULL STORY)

A simple explanation of mysterious space-stretching ‘dark energy?’
[1/10/2017]
For nearly 2 decades, cosmologists have known that the expansion of the universe is accelerating, as if some mysterious "dark energy" is blowing it up like a balloon. Just what dark energy is remains one of the biggest mysteries in physics. Now, a trio of theorists argues that dark energy could spring from a surprising source. Weirdly, they say, dark energy could come about because—contrary to what you learned in your high school physics class—the total amount of energy in the universe isn't fixed, or "conserved," but may gradually disappear.

"It's a great direction to explore," says George Ellis, a theorist at the University of Cape Town in South Africa, who was not involved in the work. But Antonio Padilla, a theorist at the University of Nottingham in the United Kingdom, says, "I don't necessarily buy what they've done."

Dark energy could be a new field, a bit like an electric field, that fills space. Or it could be part of space itself—a pressure inherent in the vacuum—called a cosmological constant. The second scenario jibes well with Einstein's theory of general relativity, which posits that gravity arises when mass and energy warp space and time. In fact, Einstein invented the cosmological constant—literally by adding a constant to his famous differential equations—to explain how the universe resisted collapsing under its own gravity. But he gave up on the idea as unnecessary when in the 1920s astronomers discovered that the universe isn't static, but is expanding as if born in an explosion.

With the observation that the expansion of the universe is accelerating, the cosmological constant has made a comeback. Bring in quantum mechanics and the case for the cosmological constant gets tricky, however. Quantum mechanics suggests the vacuum itself should fluctuate imperceptibly. In general relativity, those tiny quantum fluctuations produce an energy that would serve as the cosmological constant. Yet, it should be 120 orders of magnitude too big—big enough to obliterate the universe. So explaining why there is a cosmological constant, but just a little bitty one, poses a major conceptual puzzle for physicists. (When there was no need for a cosmological constant theorists assumed that some as-yet-unknown effect simply nailed it to zero.)

Now, Thibault Josset and Alejandro Perez of Aix-Marseille University in France and Daniel Sudarsky of the National Autonomous University of Mexico in Mexico City say they have found a way to get a reasonable value for the cosmological constant. They begin with a variant of general relativity that Einstein himself invented called unimodular gravity. General relativity assumes a mathematical symmetry called general covariance, which says that no matter how you label or map spacetime coordinates—i.e. positions and times of events—the predictions of the theory must be the same. That symmetry immediately requires that energy and momentum are conserved. Unimodular gravity possesses a more limited version of that mathematical symmetry.

Unimodular gravity reproduces most of the predictions of general relativity. However, in it quantum fluctuations of the vacuum do not produce gravity or add to the cosmological constant, which is once again just a constant that can be set to the desired value. There's a cost, however. Unimodular gravity doesn't require energy to be conserved, so theorists have to impose that constraint arbitrarily.

Now, however, Josset, Perez, and Sudarsky show that in unimodular gravity, if they just go with it and allow the violation of the conservation of energy and momentum, it actually sets the value of the cosmological constant. The argument is mathematical, but essentially the tiny bit of energy that disappears in the universe leaves its trace by gradually changing the cosmological constant. "In the model, dark energy is something that keeps track of how much energy and momentum has been lost over the history of the universe," Perez says.
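
Schematically (our shorthand for the idea just described, not the authors' exact equations), the effective cosmological constant in unimodular gravity acts as a ledger of the energy that has gone missing:

\[
\Lambda_{\rm eff}(t) \;\sim\; \Lambda_{0} \;+\; 8\pi G \int^{t} q(t')\, dt' ,
\]

where q(t') stands for the rate at which energy density is lost to the non-conserving processes. Even a minute, steady leakage accumulated over cosmic history can then build up a small but nonzero Λ.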

To show that the theory gives reasonable results, the theorists consider two scenarios of how the violation of energy conservation might come about in theories that address foundational issues in quantum mechanics. For example, a theory called continuous spontaneous localization (CSL) tries to explain why a subatomic particle like an electron can literally be in two places at once, but a big object like a car cannot. CSL assumes that such two-places-at-once states spontaneously collapse to one place or the other with a probability that increases with an object's size, making it impossible for a large object to stay in the two-place state. The knock against CSL is that it doesn't conserve energy. But the theorists show that the amount that energy conservation is violated would be roughly enough to give a cosmological constant of the right size.

The work's novelty lies in using the violation of conservation of energy to tie dark energy to possible extensions of quantum theory, says Lee Smolin, a theorist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. "It's in no way definitive," he says. "But it's an interesting hypothesis that unites these two things, which to my knowledge nobody has tried to connect before."

However, Padilla says the theorists are playing mathematical sleight-of-hand. They still have to assume that the cosmological constant starts with some small value that they don't explain, he says. But Ellis notes that physics abounds with unexplained constants such as the charge of the electron or the speed of light. "This just adds one more constant to the long list.”

Padilla also argues that the work runs contrary to the idea that phenomena on the biggest scales should not depend on those at the smallest scales. "You're trying to describe something on the scale of the universe," he says. "Do you really expect it to be sensitive to the details of quantum mechanics?" But Smolin argues that the cosmological constant problem already links the cosmic and quantum realms. So, he says, "It's a new idea that could possibly be right and thus is worth getting interested in."
(FULL STORY)

Physicists detect exotic looped trajectories of light in three-slit experiment
[1/6/2017]
Physicists have performed a variation of the famous 200-year-old double-slit experiment that, for the first time, involves "exotic looped trajectories" of photons. These photons travel forward through one slit, then loop around and travel back through another slit, and then sometimes loop around again and travel forward through a third slit.

Interestingly, the contribution of these looped trajectories to the overall interference pattern leads to an apparent deviation from the usual form of the superposition principle. This apparent deviation can be understood as an incorrect application of the superposition principle—once the additional interference between looped and straight trajectories is accounted for, the superposition can be correctly applied.

The team of physicists, led by Omar S. Magaña-Loaiza and Israel De Leon, has published a paper on the new experiment in a recent issue of Nature Communications.

Loops of light

"Our work is the first experimental observation of looped trajectories," De Leon told Phys.org. "Looped trajectories are extremely difficult to detect because of their low probability of occurrence. Previously, researchers had suggested that these exotic trajectories could exist but failed to observe them."

To increase the probability of the occurrence of looped trajectories, the researchers designed a three-slit structure that supports surface plasmons, which the scientists describe as "strongly confined electromagnetic fields that can exist at the surface of metals." The presence of these electromagnetic fields near the three slits increases the contribution of looped trajectories to the overall interference pattern by almost two orders of magnitude.

"We provided a physical explanation that links the probability of these exotic trajectories to the near fields around the slits," De Leon said. "As such, one can increase the strength of near fields around the slits to increase the probability of photons following looped trajectories."

Superposition principle accounting for looped trajectories

The new three-slit experiment with looped trajectories is just one of many variations of the original double-slit experiment, first performed by Thomas Young in 1801. Since then, researchers have been performing versions that use electrons, atoms, or molecules instead of photons.

One of the reasons why the double-slit experiment has attracted so much attention is that it represents a physical manifestation of the principle of quantum superposition. The observation that individual particles can create an interference pattern implies that the particles must travel through both slits at the same time. This ability to occupy two places, or states, at once, is the defining feature of quantum superposition.

Straight trajectories (green) and exotic looped trajectories (red, dashed, dotted) of light; the red cloud near the surface depicts the near fields, which increase the probability that photons follow looped trajectories. The graphs at left show simulations (top) and experimental results (bottom) of the interference patterns obtained when only one slit is illuminated, comparing the slit treated as an independent aperture (gray line) with the actual coupled system (blue line). The marked difference between the gray and blue curves is caused by the looped trajectories. Credit: Magaña-Loaiza et al., Nature Communications
So far, all previous versions of the experiment have produced results that appear to be accurately described by the principle of superposition. This is because looped trajectories are so rare under normal conditions that their contribution to the overall interference pattern is typically negligible, and so applying the superposition principle to those cases results in a very good approximation.

It is when the contribution of the looped trajectories becomes non-negligible that it becomes apparent that the total interference is not simply the superposition of individual wavefunctions of photons with straight trajectories, and so the interference pattern is not correctly described by the usual form of the superposition principle.
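
In schematic terms (our notation, not the paper's), the single-slit fringes arise because the detected field is the sum of a direct and a looped amplitude rather than the direct amplitude alone:

\[
I(x) \;=\; \big|\psi_{\rm direct}(x) + \psi_{\rm looped}(x)\big|^{2}
\;=\; |\psi_{\rm direct}|^{2} + |\psi_{\rm looped}|^{2}
+ 2\,\mathrm{Re}\!\left[\psi_{\rm direct}^{*}(x)\,\psi_{\rm looped}(x)\right].
\]

It is the cross term that produces interference fringes even when only one slit is illuminated; neglecting ψ_looped is the "incorrect application" of the superposition principle described below.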

Magaña-Loaiza explained this apparent deviation in more detail:

"The superposition principle is always valid—what is not valid is the inaccurate application of the superposition principle to a system with two or three slits," he said.

"For the past two centuries, scientists have assumed that one cannot observe interference if only one slit is illuminated in a two- or three-slit interferometer, and this is because this scenario represents the usual or typical case.

"However, in our paper we demonstrate that this is true only if the probability of photons to follow looped trajectories is negligible. Surprisingly, interference fringes are formed when photons following looped trajectories interfere with photons following straight (direct) trajectories, even when only one of the three slits is illuminated.

"The superposition principle can be applied to this surprising scenario by using the sum or 'superposition' of two wavefunctions; one describing a straight trajectory and the other describing looped trajectories. Not taking into account looped trajectories would represent an incorrect application of the superposition principle.

"To some extent, this effect is strange because scientists know that Thomas Young observed interference when he illuminated both slits and not only one. This is true only if the probability of photons following looped trajectories is negligible."

In addition to impacting physicists' understanding of the superposition principle as it is applied to these experiments, the results also reveal new properties of light that could have applications for quantum simulators and other technologies that rely on interference effects.

"We believe that exotic looped paths can have important implications in the study of decoherence mechanisms in interferometry or to increase the complexity of certain protocols for quantum random walks, quantum simulators, and other algorithms used in quantum computation," De Leon said.
(FULL STORY)

Actual footage shows what it was like to land on Saturn's moon Titan
[1/12/2017]
In 2005, an alien probe flew through the hazy and cold atmosphere of Titan, the largest moon of Saturn, and landed on the world's surface.

That spacecraft — named the Huygens probe — was sent from Earth by the European Space Agency along with the Cassini spacecraft to help humanity learn more about Saturn and its 53 known moons.

Thanks to a new video released by NASA, you can relive the Huygens probe's descent to Titan's surface 12 years after it actually landed.

The video shows actual footage from the spacecraft's point of view as it passed through the hazy layers of Titan's atmosphere, spotted "drainage canals" that suggest rivers of liquid methane run on the moon, and gently set down on the surface, NASA said.
(FULL STORY)

Quaternions are introduced, October 16, 1843
[10/16/2016]
Irish physicist, astronomer, and mathematician Sir William Rowan Hamilton introduced quaternions, a non-commutative extension of complex numbers, on October 16, 1843.

To be sure, Benjamin Olinde Rodrigues had already reached a result in 1840 that amounted to the discovery of quaternions in all but name, though his work on the subject went largely unnoticed at the time.

As the story goes, Hamilton knew that complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. It had been established that points in space can be represented by their coordinates, which are triples of numbers, and for many years Hamilton had known how to add and subtract triples of numbers. However, Hamilton had been stuck on the problem of multiplication and division for a long time. He could not figure out how to calculate the quotient of the coordinates of two points in space.

On Monday, October 16, 1843, Hamilton was walking with his wife to the Royal Irish Academy where he was going to preside at a council meeting. The concepts behind quaternions began forming in his mind. When the answer came to him, Hamilton carved the formula for the quaternions into the stone of Dublin’s Brougham Bridge (Broom Bridge).

A plaque on the bridge commemorates the event:

Here as he walked by on the 16th of October 1843 Sir William Rowan Hamilton in a flash of genius discovered the fundamental formula for quaternion multiplication i² = j² = k² = ijk = −1 & cut it on a stone of this bridge.

(Image: http://math.ucr.edu/home/baez/)

The next day, Hamilton wrote a letter to his friend and fellow mathematician John T. Graves describing the train of thought that led to his discovery. The letter was published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science in 1844, and states:

And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples. ... An electric circuit seemed to close, and a spark flashed forth.

Hamilton called a quadruple with these rules of multiplication a quaternion, and he devoted most of the remainder of his life to studying and teaching them.

Although quaternions became the subject of notable controversy in the late 19th century as vector algebra and vector calculus grew in popularity, they were made a mandatory examination topic in Dublin and for a while were the only advanced mathematics taught in some American universities.

Quaternions experienced a revival in the late 20th century, primarily due to their utility in describing spatial rotations.

Today, the quaternions are used in computer graphics, control theory, signal processing, and orbital mechanics, mainly for representing rotations/orientations. It is common for spacecraft attitude-control systems to be commanded in terms of quaternions, which are also used to telemeter their current attitude.
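
As a minimal sketch of that rotation use case (our own generic example, not any particular flight software or graphics library), a unit quaternion q rotates a 3-vector v via the Hamilton product q · (0, v) · q⁻¹, where for a unit quaternion the inverse equals the conjugate:

import math

def quat_mul(a, b):
    # Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def axis_angle_quat(axis, angle_rad):
    # Unit quaternion for a rotation by angle_rad about a unit axis (ux, uy, uz)
    ux, uy, uz = axis
    s = math.sin(angle_rad / 2.0)
    return (math.cos(angle_rad / 2.0), ux*s, uy*s, uz*s)

def rotate(v, q):
    # Rotate vector v by unit quaternion q: v' = q * (0, v) * conj(q)
    p = (0.0,) + tuple(v)
    w, x, y, z = quat_mul(quat_mul(q, p), quat_conj(q))
    return (x, y, z)

# Example: rotating the x-axis by 90 degrees about the z-axis gives (0, 1, 0)
q = axis_angle_quat((0.0, 0.0, 1.0), math.pi / 2.0)
print(rotate((1.0, 0.0, 0.0), q))

Composing two rotations is just one more quaternion multiplication, which is one reason attitude-control systems and graphics pipelines favor this compact, easily interpolated representation.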
(FULL STORY)

The Sound Of Quantum Vacuum
[12/21/2016]
Quantum mechanics dictates sensitivity limits in the measurements of displacement, velocity and acceleration. A recent experiment at the Niels Bohr Institute probes these limits, analyzing how quantum fluctuations set a sensor membrane into motion in the process of a measurement. The membrane is an accurate model for future ultraprecise quantum sensors, whose complex nature may even hold the key to overcome fundamental quantum limits. The results are published in the prestigious scientific journal, Proceedings of the National Academy of Sciences of the USA.

Vibrating strings and membranes are at the heart of many musical instruments. Plucking a string excites it to vibrations, at a frequency determined by its length and tension. Apart from the fundamental frequency - corresponding to the musical note - the string also vibrates at higher frequencies. These overtones influence how we perceive the 'sound' of the instrument, and allow us to tell a guitar from a violin. Similarly, beating a drumhead excites vibrations at a number of frequencies simultaneously.

These matters are no different when scaling down, from the half-meter bass drum in a classic orchestra to the half-millimeter-sized membrane studied recently at the Niels Bohr Institute. And yet, some things are not the same at all: using sophisticated optical measurement techniques, a team led by Professor Albert Schliesser could show that the membrane’s vibrations, including all its overtones, follow the strange laws of quantum mechanics. In their experiment, these quantum laws implied that the mere attempt to precisely measure the membrane vibrations sets it into motion. As if looking at a drum already made it hum!

A 'drum' with many tones
Although the membrane investigated by the Niels Bohr Institute team can be seen with the naked eye, the researchers used a laser to accurately track the membrane motion. And this indeed reveals a number of vibration resonances, all of which are simultaneously measured. Their frequencies are in the megahertz range, about a thousand times higher than the sound waves we hear, essentially because the membrane is much smaller than a musical instrument. But the analogies carry on: just like a violin sounds different depending on where the string is struck (sul tasto vs sul ponticello), the researchers could tell from the spectrum of overtones at which location their membrane was excited by the laser beam.

Yet, observing the subtle quantum effects that the researchers were most interested in, required a few more tricks. Albert Schliesser explains: “For once, there is the problem of vibrational energy loss, leading to what we call quantum decoherence. Think of it this way: in a violin, you provide a resonance body, which picks up the string vibrations and transforms them to sound waves carried away by the air. That’s what you hear. We had to achieve exactly the opposite: confine the vibrations to the membrane only, so that we can follow its undisturbed quantum motion for as long as possible. For that we had to develop a special ‘body’ that cannot vibrate at the membrane's frequencies.”

This was achieved by a so-called phononic crystal, a regular pattern of holes that exhibits a phononic bandgap, that is, a band of frequencies at which the structure cannot vibrate. Yeghishe Tsaturyan, a PhD student on the team, realized a membrane with such a special body at the Danchip nanofabrication facilities in Lyngby.

A second challenge consists in making sufficiently precise measurements. Using techniques from the field of Optomechanics, which is Schliesser’s expertise, the team created a dedicated experiment at the Niels Bohr Institute, based on a laser custom-built to their needs, and a pair of highly reflecting mirrors between which the membrane is arranged. This allowed them to resolve vibrations with amplitudes much smaller than a proton’s radius (1 femtometer).

“Making measurements so sensitive is not easy, in particular since pumps and other lab equipment vibrate with much larger amplitudes. So we have to make sure this doesn't show in our measurement record,” adds PhD student William Nielsen.

Vacuum beats the 'drum'
Yet it is exactly the range of ultra-precision measurements where it gets interesting. Then, it starts to matter that, according to quantum mechanics, the process of measuring the motion also influences it. In the experiment, this 'quantum measurement backaction' is caused by the inevitable quantum fluctuations of the laser light. In the framework of quantum optics, these are caused by quantum fluctuations of the electromagnetic field in empty space (vacuum). Odd as it sounds, this effect left clear signatures in the Niels Bohr Institute experiment's data, namely strong correlations between the quantum fluctuations of the light, and the mechanical motion as measured by light.
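
The trade-off at work here can be summarized by a textbook relation for continuous linear position measurement (quoted for context, not a result of this particular paper): the imprecision noise of the readout and the backaction force noise it exerts satisfy

\[
\sqrt{S_{xx}^{\rm imp}\,S_{FF}^{\rm ba}} \;\geq\; \frac{\hbar}{2},
\]

so turning up the laser power to read the position more precisely necessarily strengthens the random radiation-pressure force with which the light's vacuum fluctuations 'beat the drum'.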

“Observing and quantifying these quantum fluctuations is important to better understand how they can affect ultraprecision mechanical measurements - that is, measurements of displacement, velocity or acceleration. And here, the multi-mode nature of the membrane comes into play: not only is it a more accurate representation of real-world sensors. It may also contain the key to overcoming some of the traditional quantum limits to measurement precision with more sophisticated schemes, exploiting quantum correlations,” Albert Schliesser says, adding that in the long run, quantum experiments with ever more complex mechanical objects may also answer the question of why we never observe a bass drum in a quantum superposition (or will we?).

SOURCE: University of Copenhagen
(FULL STORY)

Multiple copies of the Standard Model could solve the hierarchy problem
[1/4/2017]
One of the unanswered questions in particle physics is the hierarchy problem, which has implications for understanding why some of the fundamental forces are so much stronger than others. The strengths of the forces are determined by the masses of their corresponding force-carrying particles (bosons), and these masses in turn are determined by the Higgs field, as measured by the Higgs vacuum expectation value.

So the hierarchy problem is often stated as a problem with the Higgs field: specifically, why is the Higgs vacuum expectation value so much smaller than the largest energy scales in the universe, in particular the scale at which gravity (by far the weakest of the forces) becomes strong? Reconciling this apparent discrepancy would impact physicists' understanding of particle physics at the most fundamental level.
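
To put rough numbers on the mismatch (standard values, not figures specific to this paper): the Higgs vacuum expectation value is v ≈ 246 GeV, while the Planck scale at which gravity becomes strong is M_Pl ≈ 1.2 × 10^19 GeV, so

\[
\frac{v}{M_{\rm Pl}} \;\approx\; \frac{246\ \mathrm{GeV}}{1.2\times 10^{19}\ \mathrm{GeV}} \;\sim\; 10^{-17},
\]

and the puzzle is why this ratio is so tiny when quantum corrections tend to drag the Higgs scale up toward the heaviest scales in the theory.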
"The hierarchy problem is one of the deepest questions in particle physics, and almost every one of its known solutions corresponds to a different vision of the universe," Raffaele Tito D'Agnolo, a physicist at Princeton, told Phys.org. "Identifying the correct answer will not just solve a conceptual puzzle, but will change the way we think about particle physics."
In a new paper published in Physical Review Letters, D'Agnolo and his coauthors have proposed a solution to the hierarchy problem that involves multiple (up to 10^16) copies of the Standard Model, each with a different Higgs vacuum expectation value. In this model, the universe consists of many sectors, each of which is governed by its own version of the Standard Model with its own Higgs vacuum expectation value. Our sector is the one with the smallest nonzero value.
If, in the very early universe, all sectors had comparable temperatures and seemingly equal chances of dominating, why did our sector, with the smallest nonzero Higgs vacuum expectation value, come to dominate? The physicists introduce a new mechanism called a "reheaton field" that explains this by reheating the universe as it decays. The physicists show that there are several ways in which the reheaton field could have preferentially decayed into and deposited the majority of its energy into the sector with the smallest Higgs vacuum expectation value, causing this sector to eventually dominate and become our observable universe.
Compared to other proposed solutions to the hierarchy problem, such as supersymmetry and extra dimensions, the new proposal—which the physicists call "N-naturalness"—is different in that the solution does not rely solely on new particles. Although the new proposal shares some features with both supersymmetry and extra dimensions, one of its unique characteristics is that it is not only new particles, but more importantly cosmological dynamics, that is central to the solution.
"N-naturalness is qualitatively different from the solutions to the hierarchy problem proposed in the past, and it predicts signals in cosmic microwave background (CMB) experiments and large-scale structure surveys, two probes of nature that were thought to be unrelated to the problem," D'Agnolo said.
As the physicists explain, it should be possible to detect signatures of N-naturalness by searching for signs of the existence of other sectors. For instance, future CMB experiments might detect extra radiation and changes in neutrino cosmology, since neutrinos in nearby sectors are expected to be slightly heavier and less abundant than those in our sector. This approach is interesting for another reason: the neutrinos in the other sectors are also a viable dark matter candidate, which the researchers plan to study in more detail. Future experiments might also find signatures of N-naturalness in the form of a larger-than-expected mass of axion particles, as well as supersymmetric signatures due to possible connections to supersymmetry.
"If new relativistic species are not detected by the next generation of CMB experiments (Stage 4), then I will stop thinking of N-naturalness as a possible solution to the hierarchy problem," D'Agnolo said. "According to the current timeline, these experiments should start taking data around 2020 and reach their physics goals in approximately five years."
(FULL STORY)

Universe May Have Lost 'Unstable' Dark Matter
[12/30/2016]
The early universe may have contained more dark matter than there is today, new research suggests. The findings could help scientists better understand what the universe was like just after the Big Bang, researchers said.

Most of the matter in the universe seems to be invisible and largely intangible; it holds galaxies together and only interacts with the more familiar matter through its gravitational pull. Researchers call the strange stuff dark matter, and one of the biggest questions for astrophysicists is what it actually is and how it might evolve or decay. [Twisted Physics: 7 Mind-Blowing Findings]

New work by a team of Russian scientists may offer insight into that question. Dmitry Gorbunov, of the Moscow Institute of Physics and Technology; Igor Tkachev, head of the Department of Experimental Physics at the Institute for Nuclear Research in Russia; and Anton Chudaykin, of Novosibirsk State University in Russia considered whether some unstable dark matter might have decayed since the universe's early days, turning from whatever type of particle or particles make up dark matter — that's still unknown — into lighter particles.

"We have now, for the first time, been able to calculate how much dark matter could have been lost and what the corresponding size of the unstable component would be," Tkachev said in a statement.

Their new calculations suggest that no more than 5 percent of the current amount of dark matter in the universe could have been lost since the Big Bang.

Besides suggesting new properties for the elusive dark matter, the work could be important in helping scientists understand how the universe has changed over time, the researchers said. For example, the findings may show how the universe's rate of expansion has varied and what happened in the universe's first few hundred thousand years, when matter as we know it started to form into atoms.

Mysterious matter

Dark matter is a kind of matter that has mass, so it exerts a gravitational pull. However, it doesn't interact through electromagnetism with ordinary matter, so it is invisible. That is, it doesn't reflect or absorb light. The lack of electrical charge also makes dark matter intangible. Physicists are still debating what kind of particles make up dark matter, but most researchers agree that the substance accounts for some four-fifths of the matter in the universe.

Researchers have said Planck telescope data shows only about 4.9 percent of the universe is ordinary matter, about 26.8 percent is dark matter, and the remaining 68.3 percent is dark energy, which accelerates universal expansion.

Unstable universe

In its study, the team looked at data from the Planck space telescope, which studies the cosmic microwave background from a vantage point located about 932,000 miles (1.5 million kilometers) from Earth. The cosmic microwave background is an "echo" of the Big Bang; it's the radiation from photons (light) that first started moving freely through the universe. By studying fluctuations in that radiation, it's possible to calculate the value of different parameters, such as how fast the universe was expanding at the time the radiation was emitted.

What they found was that the universe in its early days — about 300,000 years after it formed — behaved a bit differently than it does now. That conclusion comes from measuring the rate of expansion, as well as the number of galaxies in clusters, which are easier to explain if the amount of dark matter was anywhere from 2-5 percent greater than it is today.

To get that figure, the researchers compared the real universe with two models: one that assumed dark matter is stable and one that assumed the total amount of dark matter could change. The latter model did a better job of producing something like the universe seen today. So the early universe might have had two kinds of dark matter, the researchers said in a statement: one kind that decays into other particles and another that remains stable over billions of years.

"We are not currently able to say how quickly this unstable part decayed; dark matter may still be disintegrating even now," Tkachev said in a statement.

In addition, by looking at gravitational lensing – the bending of light by massive objects – of the background radiation, the researchers found an upper limit for how much of that dark matter had to decay, the scientists said. The study appears in the journal Physical Review D.
(FULL STORY)

Vera Rubin, Astronomer Who Did Pioneering Work on Dark Matter, Dies at 88
[12/26/2016]
Vera Rubin, a pioneering astronomer who helped find powerful evidence of dark matter, has died, her son said Monday.

She was 88.

Allan Rubin, a professor of geosciences at Princeton University, said his mother died Sunday night of natural causes. He said the Philadelphia native had been living in the Princeton area.

Vera Rubin found that galaxies don't quite rotate the way they were predicted, and that lent support to the theory that some other force was at work, namely dark matter.

Dark matter, which still hasn't been directly observed, makes up 27 percent of the universe — as opposed to 5 percent of the universe being normal matter. Scientists better understand what dark matter isn't rather than what it is.

Rubin's scientific achievements earned her numerous awards and honors, including a National Medal of Science presented by President Bill Clinton in 1993 "for her pioneering research programs in observational cosmology." She also became the second female astronomer to be elected to the National Academy of Sciences.

"It goes without saying that, as a woman scientist, Vera Rubin had to overcome a number of barriers along the way," California Institute of Technology physicist Sean Carroll tweeted Monday.

Rubin's interest in astronomy began as a young girl and grew with the involvement of her father, Philip Cooper, an electrical engineer who helped her build a telescope and took her to meetings of amateur astronomers.

Although Rubin said her parents were extremely supportive of her career choice, she said in a 1995 interview with the American Institute of Physics that her father had suggested she become a mathematician, concerned that it would be difficult for her to make a living as an astronomer.

She was the only astronomy major to graduate from Vassar College in 1948. When she sought to enroll as a graduate student at Princeton, she learned women were not allowed in the university's graduate astronomy program, so she instead earned her master's degree from Cornell University.

Rubin earned her doctorate from Georgetown University, where she later worked as a faculty member for several years before working at the Carnegie Institution in Washington, a nonprofit scientific research center.

During her career, Rubin examined more than 200 galaxies.

"Vera Rubin was a national treasure as an accomplished astronomer and a wonderful role model for young scientists," said Matthew Scott, president of the Carnegie Institution. "We are very saddened by this loss."
(FULL STORY)

China's Hunt for Signals From the Dark Universe
[12/19/2016]
“So far we have collected about 1.8 billion cosmic rays, among them more than 1 million particles are high energy electrons,” Professor Fan Yizhong, a member of the mission team at the Purple Mountain Observatory in Nanjing, under the Chinese Academy of Sciences (CAS), told gbtimes. China’s dark matter-hunting satellite DAMPE celebrated its one year anniversary in space over the weekend, with the team now looking for unexpected results among collected data.

Launched on December 17, 2015, the 1,900kg DArk Matter Particle Explorer (DAMPE) has spent the year measuring the spectra of extremely energetic gamma-rays and cosmic rays with the aim of identifying possible dark matter signatures. DAMPE, which is also known as Wukong, after the monkey king in the Chinese fairytale Journey to the West, was carried on a Long March 2D booster, and placed in a 500km-altitude orbit.
Scientists reported on Monday, Dec. 21, 2015, that China's ground stations had received the first data from DAMPE. (The image above is an artistic rendering that imagines the filaments of dark matter that make up parts of the cosmic web; monstrous galaxies are thought to form at the nexuses of these filaments. Credit: ALMA/ESO/NAOJ/NRAO.) A ground station in Kashgar, Xinjiang, tracked the satellite and obtained data from "Wukong," taking around seven minutes to receive and record the data, which were then transmitted to the National Space Science Center, the Chinese Academy of Sciences (CAS) reported in a statement.

DAMPE boasts a large detection area, capable not only of observing high volumes of cosmic rays but also of surveying the sky at high energies. It uses four instruments to capture high-energy particles and trace them back to their origin: a BGO calorimeter, a plastic scintillator detector, a neutron detector and a silicon-tungsten tracker. Some of the particle sources are believed to be dark matter collisions, which could give scientists new insight into dark matter and support a wealth of scientific pursuits, including studying the depths of oceans on icy moons and mapping out the layers of celestial bodies.

“[It’s] an exciting mission,” said Princeton University’s David Spergel of the DAMPE mission. A recent study in the Astrophysical Journal proposed that the solar system might be growing dark matter “hairs,” speculated to exist and sprout from Earth.

"When gravity interacts with the cold dark matter gas during galaxy formation, all particles within a stream continue traveling at the same velocity," explained Gary Prézeau of NASA's Jet Propulsion Laboratory, Pasadena, California, who proposes the existence of long filaments of dark matter, or "hairs."

Based on many observations of its gravitational pull in action, scientists are certain that dark matter exists, and have measured how much of it there is in the universe to an accuracy of better than one percent. The leading theory is that dark matter is "cold," meaning it doesn't move around much, and it is "dark" insofar as it doesn't produce or interact with light.

Galaxies, which contain stars made of ordinary matter, form because of fluctuations in the density of dark matter. Gravity acts as the glue that holds both the ordinary and dark matter together in galaxies.

According to calculations done in the 1990s and simulations performed in the last decade, dark matter forms "fine-grained streams" of particles that move at the same velocity and orbit galaxies such as ours. "A stream can be much larger than the solar system itself, and there are many different streams crisscrossing our galactic neighborhood," Prézeau said.

Prézeau likens the formation of fine-grained streams of dark matter to mixing chocolate and vanilla ice cream. Swirl a scoop of each together a few times and you get a mixed pattern, but you can still see the individual colors.

But what happens when one of these streams approaches a planet such as Earth? Prézeau used computer simulations to find out. His analysis finds that when a dark matter stream goes through a planet, the stream particles focus into an ultra-dense filament, or "hair," of dark matter. In fact, there should be many such hairs sprouting from Earth.

A stream of ordinary matter would not go through Earth and out the other side. But from the point of view of dark matter, Earth is no obstacle. According to Prézeau's simulations, Earth's gravity would focus and bend the stream of dark matter particles into a narrow, dense hair.

Hairs emerging from planets have both "roots," the densest concentration of dark matter particles in the hair, and "tips," where the hair ends. When particles of a dark matter stream pass through Earth’s core, they focus at the "root" of a hair, where the density of the particles is about a billion times more than average. The root of such a hair should be around 600,000 miles (1 million kilometers) away from the surface, or twice as far as the moon. The stream particles that graze Earth's surface will form the tip of the hair, about twice as far from Earth as the hair’s root.
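
A crude order-of-magnitude check of those distances (our own back-of-the-envelope Newtonian estimate with an assumed stream speed of about 220 km/s, not Prézeau's full simulation) treats Earth as a point mass that deflects a surface-grazing particle by an angle of roughly 2GM/(bv²), which brings the grazing part of the stream to a focus at a distance of roughly b divided by that angle:

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # Earth mass, kg
R_EARTH = 6.371e6      # Earth radius, m (impact parameter of a grazing particle)
V_STREAM = 2.2e5       # assumed dark matter stream speed relative to Earth, m/s

theta = 2.0 * G * M_EARTH / (R_EARTH * V_STREAM**2)  # small-angle deflection of a slow particle
focus = R_EARTH / theta                              # distance at which grazing particles converge

print(f"deflection angle ~ {theta:.2e} rad")
print(f"focal distance   ~ {focus / 1e9:.1f} million km")  # roughly 2.5 million km

That lands within a factor of a few of the quoted figures (a root near 1 million kilometers and a tip about twice as far), and since the estimate scales with the square of the stream speed, slower streams would focus closer to Earth.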

"If we could pinpoint the location of the root of these hairs, we could potentially send a probe there and get a bonanza of data about dark matter," Prézeau said.

A stream passing through Jupiter's core would produce even denser roots: almost 1 trillion times denser than the original stream, according to Prézeau's simulations.

"Dark matter has eluded all attempts at direct detection for over 30 years. The roots of dark matter hairs would be an attractive place to look, given how dense they are thought to be,” said Charles Lawrence, chief scientist for JPL’s astronomy, physics and technology directorate.

Another fascinating finding from these computer simulations is that the changes in density found inside our planet – from the inner core, to the outer core, to the mantle to the crust – would be reflected in the hairs. The hairs would have "kinks" in them that correspond to the transitions between the different layers of Earth.

Theoretically, if it were possible to obtain this information, scientists could use hairs of cold dark matter to map out the layers of any planetary body, and even infer the depths of oceans on icy moons.


DAMPE is testing the theory that dark matter particles may annihilate or decay and then produce high-energy gamma-rays or cosmic rays – in particular electron/positron pairs – and, with the widest observation spectrum and highest energy resolution of any dark matter probe in the world, it aims to collect the evidence.

The data analysis of the DAMPE collaboration, which includes institutions from across China and international partners from Italy and Switzerland, has been concentrated on the high energy cosmic rays, in particular the electrons.

“We are looking forward to find something ‘unexpected’ in the cosmic ray and gamma-ray spectra,” Fan says. The team is looking to publish their first results in early 2017.

DAMPE meanwhile will continue to scan in all directions for the second year of its three-year mission, before switching to focus on areas where dark matter is most likely to be observed in the third. The spacecraft carries four science payloads in total and has the potential to advance the understanding of the origin and propagation mechanism of high energy cosmic rays, as well as new discoveries in high energy gamma astronomy.

The Daily Galaxy via nasa.gov, gbtimes.com and theguardian.com
(FULL STORY)

Baylor Physics Ph.D. Graduate Quoted in "How Realistic Is the Interstellar Ship from 'Passengers'?"
[12/23/2016]
The movie "Passengers," which opened yesterday (Dec. 21), explores the fascinations and perils of interstellar travel, but could the kind of starship portrayed in the movie ever exist in real life?

The film begins on board the Starship Avalon, which is carrying more than 5,000 passengers to a distant, habitable planet known as Homestead II.

Travelling at half the speed of light, the crew and passengers are expected to hibernate for 120 years before arriving. That is, until somebody accidentally wakes up 90 years early.

Is there anything remotely realistic about this spaceship? Space.com posed that question to several space travel experts, as well as Guy Hendrix Dyas, the film's production designer. Dyas looked at the history of movie spaceships (including the vehicles from the "Star Trek" and "Star Wars" universes) in his quest to come up with something unique for the new film.

The Avalon has three long, thin modules that wrap around a common center and spin (sort of like stripes on a barbershop pole). Dyas said he based that design on sycamore seeds. It appears that the spin also provides the ship with artificial gravity, similar to fictional ships in the movies "Interstellar" and "2001: A Space Odyssey." The ship is powered by eight nuclear fusion reactors, Dyas said, and can run autonomously, healing most systems even with the crew asleep (as seen in the film).

The ship's immense structure is about 1 kilometer (0.62 miles) in length, and Dyas said he imagines that it was assembled in space over decades. The film takes place at an indeterminate point in the future, Dyas said, but he assumed that by the time the ship was being built, humans would have the ability to mine some of the materials from nearby asteroids or the moon to save on transportation costs.

"My approach to the [ship] design was that I tried to go about it as though I was a cruise liner ship designer," Dyas told Space.com. "I wanted to put myself in the shoes of somebody who had been designing a craft that had a portion of it dedicated to entertainment, and of course that led to the array of colors and textual changes in the ship."

This approach led Dyas to design the more functional areas (such as the mess hall) in stainless steel, while a classy passenger pub was decorated in rich oranges, golds and reds, for example.

Banks of hibernation pods occupy huge halls in the ship. The crew slumbers in separate quarters, inaccessible to the passengers. The pods are clustered into small groups, perhaps (Dyas suggests) so that if one group's cluster fails, at least the other 5,000 passengers are theoretically unaffected.

The hibernation procedure is not really described in the film, but what's clear to moviegoers is what happens afterward: passengers are soothed by a holographic figure explaining where they are. They are escorted to an elevator, then guided to their individual cabin, where they can relax for the last four months of the journey.

In between resting in their quarters, passengers can also get to know the rest of the 5,000 people in common areas, such as the mess hall, the grand concourse, the pool or the bar.

While "Passengers" shows people placed in a hibernation state for decades at a time, that kind of technology does not exist today. There are situations, however, where patients can be put into induced comas with cooled saline solutions for a few days to allow traumatic injuries to heal.

In 2015, a company called SpaceWorks received a NASA Innovative Advanced Concepts grant to investigate the possibility of extending the timeframe of an induced stasis in humans even further than what is currently possible. Aerospace engineer John Bradford, the company's COO, told Space.com that induced stasis should be possible given that some mammals can hibernate for months. (NIAC grants are for early-stage work in far-off technologies.)

"We're not trying to extend the human lifetime," Bradford said, so the technology that SpaceWorks is pursuing is different from what is shown in "Passengers." But in other respects, the movie shows essentially the same thing his company strives for.

"We're trying to put people in a small container to minimize the mass and power requirements, and the consumables [during spaceflight]," he added, saying that during a long Mars journey of perhaps six months, putting astronauts into stasis would cut down on the amount of food required for the mission, not to mention the possibility of crew boredom.

And what about exercise? Bradford said it would be possible to keep up an astronaut's muscle mass using neuromuscular electric stimulation; there have been some positive results in comatose patients using that technique, he said.

Bradford said he had been lucky enough to see "Passengers" before its release, and that he was really pleased to see an emphasis on hibernation, and what happens in the moments after waking up, when the passengers are disoriented and extremely tired (since hibernation or stasis is not the same as sleep).

"That part of the storyline is usually jumped over," he said.

Nuclear fusion is a possible source of propulsion for interstellar ships, but the problem is the size of the reactors that would need to be assembled in space, or launched there, according to some scientists we talked to. So other methods are being considered to get spacecraft going at interstellar speeds.

One idea under consideration by Philip Lubin, a physics professor at the University of California, Santa Barbara, uses lasers. Under another NIAC grant, he is developing a concept known as Directed Energy Propulsion for Interstellar Exploration, which would generate propulsion from laser photons reflected in a mirror. The long-term goal is to create a spacecraft that can, like in "Passengers," move at a significant fraction of the speed of light.

Antimatter engines are another possibility for fueling interstellar ships, said Andreas Tziolas, the co-founder and president of Icarus Interstellar. Antimatter particles are naturally occurring particles that are "opposites" to regular matter particles — so the positron is the antimatter equivalent to the electron; the particles have the same mass but are different in other ways, including electric charge (the electron is negative, the positron is positive). When matter and antimatter collide, they annihilate, and release energy.
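
For a sense of the energy scale (a textbook E = mc² figure, not a number from the article): annihilating one kilogram of matter with one kilogram of antimatter would release

\[
E = mc^{2} = (2\ \mathrm{kg})\,(3\times 10^{8}\ \mathrm{m/s})^{2} \approx 1.8\times 10^{17}\ \mathrm{J},
\]

essentially all of it carried away as photons and other radiation, which is the capture problem Tziolas describes below.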

"The energy [an antimatter engine] generates is very pure in that it generates a lot of photons when matter reacts with antimatter," he told Space.com. "All of the matter is annihilated and it turns into pure photonic energy. However, the photons themselves are hard to capture."

Though it's not stated directly in the film, it's possible the "Passengers" ship is being fueled by the interstellar medium — the tenuous collection of hydrogen particles that populate much of the universe. This concept was proposed in a 1960 thought experiment by American physicist Robert Bussard, who argued it would allow a ship to travel without having to haul fuel along for the ride.

But there's a problem with that idea, according to Geoffrey Landis, a science fiction author and NASA physicist. Since 1960, scientists have discovered that the medium is too sparse to allow fusion to happen, Landis said.

"The idea was, if you don't carry your fuel with you, you might be able to avoid having a simply enormous fuel tank," he said. But with that theory debunked, the problem remains about how to get to such an incredible speed while still hauling fuel with you. [Does Humanity's Destiny Lie in Interstellar Space Travel? (Op-Ed)]

From a practical standpoint, Landis also agreed that a ship that size would likely have to be built largely in space, and that will probably require asteroid mining.

While asteroid mining is still in the future, there are a couple of companies that are getting started on prospecting. Both Deep Space Industries and Planetary Resources have plans to scout out nearby asteroids to learn about their composition, and the possibilities for getting spacecraft out there. Asteroid-mining technology is in an early stage, but both companies are generating other products (such as Earth observation) that have received some support from customers.

Building a business case would take some time, but Landis said it would be very possible to create a spacecraft from extraterrestrial resources.

"In the long term, if we're going to build these enormous habitats, we are going to have to build them from material in space," he said. "That's a very feasible idea. There's literally millions of asteroids out there from which we could harvest materials without having to drag it out of the gravity well of the Earth."

Ship design

Landis also seemed to think that the Avalon creates gravity by rotating.

"I'm getting a little tired of artificial gravity in 'Star Trek' and 'Star Wars,'" he said, referring to the ability of the ships in these long-standing franchises to generate gravity by more theoretical means.

Experts interviewed for the story agreed that, in general, the ship also appears to take into account human factors, which means designing an environment so that it can best accommodate how humans operate.

An example is how the environment is decorated. Even on the International Space Station, the sterile gray interior is populated with pictures, signs and other mementoes from past crews. Individual astronauts can decorate their quarters to their liking, so that they have family pictures to look at during their six-month missions. So the décor choices that Dyas made are important in real-world spaceflight as well.

Looking at previews for the movie, Tziolas said he thinks the Starship Avalon is similar to the concept that Icarus Interstellar has proposed for an interstellar spaceship. Called Project Hyperion, this craft also has cruise ship-like amenities, room for 5,000 passengers and a spinning design for artificial gravity.

Tziolas added that he is pleased that Hollywood is getting more realistic with its ship designs in general.

So could the ship from "Passengers" really exist? Our experts seemed to agree that there are some aspects that reflect real-world science, but some key questions remain about how such a massive vessel would make an interstellar trek.
(FULL STORY)

Shutting a new door on locality
[12/20/2016]
The classic mystery of quantum mechanics concerns the passage of a particle through two slits simultaneously. We know from our understanding of the double-slit experiment that the photon must have traveled through both slits, yet we can never actually observe the photon passing through both slits at the same time. If we find the photon in one place, its probability amplitude to be anywhere else immediately vanishes.

At least, that is the familiar story. In a new experiment described in Scientific Reports, Ryo Okamoto and Shigeki Takeuchi of Kyoto University in Japan have shown that under the right conditions, a single photon can have observable physical effects in two places at once.

Understanding the interaction of light passing through two slits seemed much simpler back when Thomas Young made this sketch (of water wave interference) in the early 1800s.
The power of postselection

The heightened sort of nonlocality demonstrated in Kyoto occurs only in the presence of postselection. Consider that identically prepared photons impinging on, for instance, a double slit will land at different points on the screen some time later. The question posed by Yakir Aharonov, Peter Bergmann, and Joel Lebowitz (ABL) in 1964, and revisited in even more dramatic form by Aharonov, David Albert, and Lev Vaidman (AAV) in 1988, was the following: If we look only at the subensemble of photons that land at some particular point on the screen, does that observation give us new information about those photons? Might it allow us to draw more conclusions about what the photons were doing between the time they were prepared and the time we selected a particular region on the screen?

The suggestion that we can chart the progress of photons on their journey may sound heretical at first. The standard lore is that a wave function is a complete description of the quantum state, and that the uncertainty principle prevents one from knowing both where a particle came from and where it will go. Yet ABL and AAV showed rigorously—using only the standard rules of quantum mechanics—that if some measuring apparatus had been interacting with the photons, its final state (the “pointer position” on the meter, as measurement theorists put it) would depend on which subset of photons had been considered. (See the article by Aharonov, Sandu Popescu, and Jeff Tollaksen, Physics Today, November 2010, page 27.)

In the ABL case, the interaction with the meter can be thought of as collapsing the photons into one state or another. Depending on which measurement outcome occurs, the photons may subsequently be more likely to reach one point or another on the screen: The pointer position becomes correlated with the final position of the photons. Classical statistics is sufficient for working out what the average pointer position should be, conditioned on a particular final state. (AAV extended this work to consider postselected “weak” measurements, a scenario that yields purely quantum results one could not obtain classically. Weak measurements raise even more thorny questions about the foundations of quantum mechanics that go beyond the scope of this article.)

Many experiments have confirmed these predictions about the outcomes of conditional measurements. Experimenters have applied postselection to studying interpretational issues in quantum theory and to the practical purpose of amplifying small effects to improve measurement sensitivity. Postselection has also become an important element in the toolbox of quantum information, with applications in linear-optical approaches to quantum computing and the closely related measurement-based quantum computing.

Head-scratching consequences

Although the formulas put forward by ABL and AAV are unavoidable consequences of quantum theory, they cry out for interpretation—especially since they expose new counterintuitive effects. Suppose an experimenter prepares a photon in a symmetric superposition of three states, |ψi⟩ = (|A⟩ + |B⟩ + |C⟩) / √3, but later finds the particle in a different superposition of those three states, |ψf⟩ = (|A⟩ + |B⟩ – |C⟩) / √3. What can the experimenter say about where the particle was between preparation and postselection? In 1991 Aharonov and Vaidman showed that a measurement of the particle number in state A would yield a value of 1 whenever the postselection succeeded; yet by symmetry, so would a measurement of the particle number in state B. In other words, the postselected particle would be certain to influence a measurement apparatus looking for it at A, and equally certain to influence a measurement apparatus looking for it at B. It’s akin to saying that the conditional probability is 100% to be at A and 100% to be at B.

Obviously, that situation never arises classically. But in the quantum world, every measurement disturbs the system, and the two conditional probabilities correspond to different physical situations: one in which a measurement interaction occurred at A, and one in which it occurred at B. Suppose the experimenter looks for the particle at A but doesn’t find it. That projects the original state onto (|B⟩ + |C⟩)/√2, which is orthogonal to |ψf⟩. Therefore, if the experimenter does not find the particle at A, the postselection never succeeds. It follows trivially that whenever the postselection does succeed, the particle must have been found at A. My group carried out a verification of that prediction in 2004.
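
For readers who want to check the arithmetic, here is a minimal numerical sketch of the Aharonov-Bergmann-Lebowitz (ABL) rule applied to the three-box states quoted above. It assumes Python with NumPy and is only an illustration of the argument, not the authors' analysis code.

import numpy as np

# Pre- and postselected states from the three-box argument above.
# The amplitudes are real, so no complex conjugation is needed below.
psi_i = np.array([1, 1, 1]) / np.sqrt(3)    # |psi_i> = (|A> + |B> + |C>)/sqrt(3)
psi_f = np.array([1, 1, -1]) / np.sqrt(3)   # |psi_f> = (|A> + |B> - |C>)/sqrt(3)

def abl_probability(projector):
    """Conditional probability that an intermediate projective measurement of
    `projector` yields 1, given successful postselection on psi_f (ABL rule)."""
    found = abs(psi_f @ projector @ psi_i) ** 2
    not_found = abs(psi_f @ (np.eye(3) - projector) @ psi_i) ** 2
    return found / (found + not_found)

P_A = np.diag([1.0, 0.0, 0.0])
P_B = np.diag([0.0, 1.0, 0.0])

print(abl_probability(P_A))  # 1.0 -- certain to be found at A
print(abl_probability(P_B))  # 1.0 -- and equally certain to be found at B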

If the results of that quantum shell game weren’t baffling enough, Aharonov and Vaidman proposed an even stranger thought experiment in 2003. In “How one shutter can close N slits,” they imagined a series of two or more slits (let us consider the simplest example, N=2, although the argument applies for any N). They then considered an experimenter who possessed a single shutter that could block one of the slits. Instead of placing the shutter to block one slit or the other, the experimenter prepares the shutter in a superposition of both positions, along with a third position that does not block a slit.

Aharonov and Vaidman showed that if the experimenter postselected in a different, properly chosen superposition (the |ψf⟩ of the original three-box problem), then any measurement of whether the shutter was blocking a particular one of the two slits would be guaranteed to yield an affirmative answer. If a single photon was sent toward slit A, the shutter would block its path. If the photon was sent toward slit B, the same shutter would still be guaranteed to block it. Most remarkably, the photon would also be stopped by the shutter if it was sent along any coherent superposition of the paths leading to the N slits. In other words, the experimenter can block the photon whether it goes through slit A or slit B—and also block the photon if it doesn't go through one particular slit or the other, with no need to determine which slit it hit. In this very real sense, the shutter acts as though it is in two places at once.

In a recent head-scratching experiment, researchers managed to get one shutter to simultaneously blockade two slits. Credit: Shigeki Takeuchi
One might be tempted to cry foul. “The shutter is affected by the choice to look for it in one place or another,” that person might say. “It wouldn’t be able to block two photons arriving at both slits.” Yet in a weak-measurement version of the proposal, in which the disturbance due to measurements is reduced to near zero, the predictions hold: The unperturbed shutter acts as though it is fully at A and also fully at B.

Building a quantum shutter

Of course, macroscopic shutters can no more easily be placed in superpositions than can macroscopic felines. Enter Okamoto and Takeuchi. Using ideas from linear-optical quantum computing, they replaced the slits and shutters with quantum routers—logic gates in which the presence of one photon (let’s call it a shutter photon) determines whether another photon (the probe photon) is transmitted or reflected.

The researchers prepared a shutter photon, which was generated via spontaneous parametric down-conversion, in a superposition of three paths. If the shutter photon took path 1, it should block a probe photon reaching router 1; if it took path 2, it should block a probe photon reaching router 2; and if it took path 3, it wouldn't block any photons at all. The three paths were then recombined with a phase shift, so that the firing of a certain detector signaled a measurement of the shutter photon in state |ψf⟩. Looking only at cases when the detector fired, the researchers could study the behavior of a postselected photon and see which slit or slits it blocked.

Lo and behold, when the postselection succeeded, Okamoto and Takeuchi found that the probe photons had a high probability of being reflected, regardless of which router they were sent toward. Because of subtle technical issues involved in the concatenation of the linear-optical quantum gates, the probability was limited to a maximum of 67%. The researchers observed about 61% experimentally, clearly exceeding the 50% threshold that could be achieved by a shutter constrained to be in one place at a time. Furthermore, the scientists confirmed the remarkable prediction that the same shutter was capable of blocking probe photons prepared in arbitrary superpositions of the two paths. The experiment demonstrates that any “shutter” (in this case a shutter photon) that is prepared in a given initial state and then measured in the appropriate final state must have been in front of multiple slits at the same time.

So, what does this experiment teach us about nonlocality? It’s hard to find a more down-to-earth approach to that question than my mother’s insightful proposal: “I hope this means I can shop in two different places at the same time.” Well, quantum mechanics may offer us fascinating new phenomena, but they always come at a price. In the case of quantum shopping, the postselection step means that my mother may get no shopping done at all. However, she may be in luck if she happens to know that only one of her N favorite stores still has the gift she wants, but she doesn’t know which store. An application of Okamoto and Takeuchi's result would mean that in the time it takes her to visit just a single store, she would have a finite probability of being certain to find the gift.

I’ll leave the calculation of the exact value of that probability as an exercise for holiday shoppers. But from the perspective of research into the foundations of quantum mechanics, we now have experimental confirmation that postselected systems exhibit a form of nonlocality even more striking than the ones we were familiar with before.

Aephraim Steinberg is a professor of physics at the University of Toronto, where he is a founding member of the Centre for Quantum Information and Quantum Control and a fellow of the Canadian Institute for Advanced Research.
(FULL STORY)

Unexpected interaction between dark matter and ordinary matter in mini-spiral galaxies
[12/15/2016]
Statistical analysis of mini-spiral galaxies shows an unexpected interaction between dark matter and ordinary matter. According to the SISSA study recently published in Monthly Notices of the Royal Astronomical Society, where the relationship is obvious and cannot be explained in a trivial way within the context of the Standard Model, these objects may serve as "portals" to a completely new form of Physics which can explain phenomena like dark matter and dark energy.

They resemble a spiral galaxy like ours, only ten thousand times smaller: the mini-spiral galaxies studied by Professor Paolo Salucci of the International School for Advanced Studies (SISSA) in Trieste, and Ekaterina Karukes, who recently earned her PhD at SISSA, may prove to be "the portal that leads us to a whole new Physics, going beyond the standard model of particles to explain dark matter and dark energy," says Salucci. It is the first time these objects have been studied statistically, a method that can erase the "individual" variability of each object, thus revealing the general characteristics of the class. "We studied 36 galaxies, which was a sufficient number for statistical study. By doing this, we found a link between the structure of ordinary, or luminous, matter like stars, dust and gas, and that of dark matter."
Dark matter is one of the great mysteries of Physics: since it does not emit electromagnetic radiation we cannot see it, even with the most sophisticated instruments. It was only discovered through its gravitational effects. Many believe it makes up 90% of our Universe. "Most dark matter, according to the most credible hypotheses, would be non-baryonic, made up of WIMPs (weakly interacting massive particles). It would not interact with ordinary matter except through gravitational force," continues Karukes. "Our observations, however, disagree with this notion."
Salucci and Karukes showed that, in the objects they observed, the structure of dark matter mimics visible matter in its own way. "If, for a given mass, the luminous matter in a galaxy is closely compacted, so is the dark matter. Similarly, if the former is more widespread than in other galaxies, so is the latter."
The "tip of the iceberg"
"It is a very strong effect that cannot be explained trivially using the Standard Model of particles." The Standard Model is the most widely-accepted theory of Physics in the scientific community. It explains fundamental forces (and particles of matter), however it contains some doubtful points, most notably the fact that it does not include gravitational force. Phenomena such as the existence of dark matter and dark energy make it clear to scientists that there is another sort of physics yet to be discovered and explored.
"From our observations, the phenomenon, and thus the necessity, is incredibly obvious. At the same time, this can be a starting point for exploring this new kind of physics," continues Salucci. "Even in the largest spiral galaxies we find effects similar to the ones we observed, but they are signals that we can try to explain using the framework of the Standard Model through astrophysical processes within galaxies. With mini-spirals, however, there is no simple explanation. These 36 items are the tip of the iceberg of a phenomenon that we will probably find everywhere and that will help us discover what we cannot yet see. "
More information: E.V. Karukes et al. The universal rotation curve of dwarf disk galaxies, Monthly Notices of the Royal Astronomical Society (2016). DOI: 10.1093/mnras/stw3055


(FULL STORY)

Thermodynamics constrains interpretations of quantum mechanics
[12/16/2016]
John Stewart Bell’s famous theorem is a statement about the nature of any theory whose predictions are compatible with those of quantum mechanics: If the theory is governed by hidden variables, unknown parameters that determine the results of measurements, it must also admit action at a distance. Now an international collaboration led by Adán Cabello has invoked a fundamental thermodynamics result, the Landauer erasure principle, to show that systems in hidden-variable theories must have an infinite memory to be compatible with quantum mechanics.

In quantum mechanics, measurements made at an experimenter’s whim cause a system to change its state; for a two-state electron system, for example, that change can be from spin up in the z-direction to spin down in the x-direction. Because of those changes, a system with hidden variables has to have a memory so that it knows how to respond to a series of measurements; if that memory is finite, it can serve only for a limited time. As an experimenter keeps making observations, the system must eventually update its memory, and according to the Landauer principle, the erasure of information associated with that update generates heat. (See the article by Eric Lutz and Sergio Ciliberto, Physics Today, September 2015, page 30.) In the electron example, if all spin measurements must be made along the x- or z-axis, each measurement dissipates a minimum amount of heat roughly equal to Boltzmann’s constant times the temperature. Cabello and colleagues show, however, that if an experimenter is free to make spin measurements anywhere in the xz-plane, the heat generated per measurement is unbounded—obviously, an unphysical result.
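
The Landauer bound invoked here is easy to put a number on. The sketch below (Python; room temperature of 300 K is an assumed illustrative value) evaluates the minimum heat released when a single bit of memory is erased, k_B T ln 2.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed temperature in kelvin (room temperature)

# Landauer bound: minimum heat dissipated when one bit of memory is erased
q_min = k_B * T * math.log(2)
print(f"Minimum heat per erased bit at {T} K: {q_min:.2e} J")  # ~2.87e-21 J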

Heat need not be produced in a hidden-variables theory if a system could store unlimited information. Such is the case, for example, for David Bohm’s version of quantum mechanics, in which a continuous pilot wave serves as the information repository. And in formulations of quantum mechanics without hidden variables, such as in the Copenhagen interpretation, heat is not generated because there is no deterministic register to update. (A. Cabello et al., Phys. Rev. A 94, 052127, 2016.)
(FULL STORY)

Billions of Stars and Galaxies to Be Discovered in the Largest Cosmic Map Ever
[12/20/2016]
The Pan-STARRS telescope in Hawaii spent four years scanning the skies to produce two petabytes of publicly available data. Now it's up to us to study it.

Need precision observations of a nearby star? Want to measure the light-years to a distant galaxy? Or do you just want to stare into the deep unknown and discover something no one has ever seen before? No problem! The Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) has got you covered after releasing the biggest digital sky survey ever carried out to the world.

"The Pan-STARRS1 Surveys allow anyone to access millions of images and use the database and catalogs containing precision measurements of billions of stars and galaxies," said Ken Chambers, Director of the Pan-STARRS Observatories, in a statement. "Pan-STARRS has made discoveries from Near Earth Objects and Kuiper Belt Objects in the Solar System to lonely planets between the stars; it has mapped the dust in three dimensions in our galaxy and found new streams of stars; and it has found new kinds of exploding stars and distant quasars in the early universe."

RELATED: Vast Map Charts Our 2 Billion Light-Year Wide Cosmic 'Hood

The Pan-STARRS project is managed by the University of Hawaii's Institute for Astronomy (IfA) and the vast database is now being made available through the Space Telescope Science Institute (STScI) in Baltimore, Md. To say this survey is "big" is actually a disservice to just how gargantuan a data management task it is. According to the IfA, the entire survey takes up two petabytes of data, which, as the university playfully puts it, "is equivalent to one billion selfies, or one hundred times the total content of Wikipedia."
(FULL STORY)

Scientists Measure Antimatter for the First Time
[12/19/2016]
Using a laser to excite the antiparticles (positrons and antiprotons), scientists can begin to measure the atomic structure of some of the most mysterious material in the universe.
Antimatter isn't the absurd theoretical substance it sounds like—it's just material composed of particles that have the same mass as conventional particles, but opposite charges. An electron with a positive charge is called a positron, and a proton with a negative charge is called an antiproton. Material composed of these antiparticles is called antimatter.

In a groundbreaking experiment published today in Nature, physicists at CERN, the European Organization for Nuclear Research's particle physics lab outside Geneva, have measured the energy levels of antihydrogen for the first time. Hydrogen is the simplest element on the periodic table, with just one electron and one proton, so making antihydrogen is easier than any other type of antimatter.

It's incredibly difficult to create antimatter and retain it for any extended period of time. This is because antimatter and matter annihilate each other when they come into contact. An electron and positron will zap each other out of existence, releasing energy in the form of light, and protons and antiprotons do the same. Considering conventional matter is floating all around our world, it can be quite difficult to keep antimatter from coming into contact with it.

"What you hear about in science fiction—that antimatter gets annihilated by normal matter—is 100 percent true," Jeffrey Hangst, a physicist at Denmark's Aarhus University who founded the ALPHA group at CERN to study antimatter, told NPR. "[It] is the greatest challenge in my everyday life.
To create antihydrogen, ALPHA physicists combined positrons and antiprotons in a vacuum tube and used extremely powerful magnetic fields to keep the resulting antihydrogen from colliding with the walls of the container. Using this technique, the team has successfully maintained antihydrogen atoms for about 15 minutes.

Electrons orbit the nucleus of conventional atoms at different energy levels, and when they move from one energy level to another, they let off energy that can be measured as light on the electromagnetic spectrum. The same is true for antihydrogen atoms.

The ALPHA researchers used a laser to excite the antihydrogen so the positrons would jump from a lower energy level to a higher one. Then, when the positrons return to the lower energy level, scientists can measure the light released. What they found is that positrons in antihydrogen move from one energy level to another in the same way that electrons do in normal hydrogen.
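
As a rough guide to the energy scales involved, the sketch below evaluates the textbook Bohr-model levels of ordinary hydrogen and the n = 1 to n = 2 interval; the ALPHA result is that antihydrogen reproduces ordinary hydrogen's spectrum to within the experiment's precision. This is an illustrative Python calculation, not the collaboration's analysis.

# Bohr-model energy levels of ordinary hydrogen: E_n = -13.6057 eV / n^2.
RYDBERG_EV = 13.605693      # Rydberg energy in eV
H_EV_S = 4.135667696e-15    # Planck constant in eV*s
C_M_S = 2.99792458e8        # speed of light in m/s

def level_energy(n):
    """Energy of the n-th hydrogen level in eV (Bohr model)."""
    return -RYDBERG_EV / n**2

delta_e = level_energy(2) - level_energy(1)       # ~10.2 eV for the 1 -> 2 interval
wavelength_nm = H_EV_S * C_M_S / delta_e * 1e9    # equivalent photon wavelength, ~121.5 nm
print(f"1 -> 2 interval: {delta_e:.2f} eV (single photon at {wavelength_nm:.1f} nm)")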

This seemingly benign finding has actually captured the attention of theoretical physicists around the world because it is central to one of the great mysteries of the universe: Why does anything exist at all? Models of the Big Bang suggest that an equal amount of matter and antimatter should have been created. As far as we can tell, all of the antimatter and matter of the universe should have been annihilated shortly after it was created. But here we are.

"Something happened," Hangst told NPR, "some small asymmetry that led some of the matter to survive, and we simply have no good idea that explains that right now."

It could be that antimatter doesn't obey the same laws of physics as matter. It could be that our models of the Big Bang are flawed. But whatever the true answer to our existence is, experiments with antimatter like this one could unlock the secrets. Our first experiment suggests that antihydrogen behaves just like boring old hydrogen—but this is merely the beginning of a new kind of scientific study.
(FULL STORY)

Europe's Bold Plan for a Moon Base Is Coming Together
[12/20/2016]
Imagine an international research station on the moon, where astronauts and cosmonauts and taikonauts and any other-nauts from around the world conduct science experiments, gather resources, build infrastructure, study our home planet from afar, and erect a new radio telescope to probe the mysteries of the ancient cosmos. This is the vision of Jan Woerner, the German civil engineer who serves as the Director General of the European Space Agency. He calls it "Moon Village."
Moon Village isn't so much a literal village as it is a vision of worldwide cooperation in space. It is part of Woerner's larger concept of "Space 4.0."

Woerner, you see, breaks down the history of space exploration into four periods. All of ancient and classical astronomy is lumped into Space 1.0, the space race from Sputnik to Apollo is Space 2.0, and the establishment of the International Space Station defines the period of Space 3.0. As the largest space station—which holds the record for longest continuous human habitation, 16 years and counting—the ISS soars as a shining example of successful, longterm, peacetime international cooperation like no other program in the history of humankind.
Space 4.0 is a continuation of that spirit of global cooperation, and it represents the entry of private companies, academic institutions, and individual citizens into the exploration of the cosmos. Moon Village, part of Space 4.0, is a worldwide community of people who share the dream of becoming an interplanetary species.

"Somebody was asking me, 'When do you do it, and how much money do you need?' I said it's already progressing, as a village on Earth. The village starts with the first actor, and we have several actors right now, so it's already on its way," Woerner said to the Space Transportation Association (STA) at a Capital Hill luncheon on December 9, as reported by Aviation Week.
Of course, all this sentiment is nice, but where are we in terms of building a physical moon base? Closer than you might think.

The rest of Europe has united behind Woerner's idea, as the science ministers of each ESA member state have endorsed Space 4.0. To that end, the European Space Agency is developing a Lunar Lander, its first. The program was postponed in 2012 because Germany, which is covering 45 percent of the costs, couldn't convince the other member nations to put up the additional 55 percent. Renewed interest in lunar exploration with a German at the helm of ESA could be enough to jumpstart the program again.

Meanwhile, ESA is investing in technologies to develop 3D printing methods that would work using lunar soil. The research could pave the way for constructing tools and even habitats on the moon. The British architecture firm Foster + Partners has gone so far as to design a catenary dome with a cellular structure that could guard an inflatable lunar habitat against both small pieces of debris and space radiation.

Other nations have their eyes set on the moon as well. India and Japan both have lunar rovers under development that they plan to launch before 2020. China has two sample return missions in the works and a plan to land on the far side of the moon for the very first time, all before the decade is out. The space agencies of Europe, Japan, Russia and China have all proposed missions to put astronauts on the moon in the coming decades.
(FULL STORY)

Einstein's Theory Just Put the Brakes on the Sun's Spin
[12/16/2016]
Although the sun is our nearest star, it still hides many secrets. But it seems that one solar conundrum may have been solved and a theory originally proposed in 1905 by Albert Einstein could be at the root of it all.

Twenty years ago, solar astronomers realized that the uppermost layer of the sun rotates slower than the rest of the sun's interior. This is odd. It is well known the sun rotates faster at its equator than at its poles — a phenomenon known as "differential rotation" that drives the sun's 11-year solar cycle — but the fact that the sun has a sluggish upper layer has been hard to understand. It's as if there's some kind of force trying to hold it in place while the lower layers churn below it.
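
For concreteness, the surface rotation rate is usually described by an empirical fit of the form omega = A + B sin^2(latitude) + C sin^4(latitude). The sketch below uses approximate, illustrative coefficients (not a fit to any specific dataset) simply to show the equator-to-pole slowdown.

import math

def surface_rotation_deg_per_day(latitude_deg):
    """Approximate solar surface rotation rate versus latitude, using the
    standard empirical form omega = A + B*sin^2(phi) + C*sin^4(phi).
    The coefficients are rough illustrative values."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    A, B, C = 14.7, -2.4, -1.8   # degrees per day, approximate
    return A + B * s2 + C * s2 ** 2

for lat in (0, 30, 60, 75):
    omega = surface_rotation_deg_per_day(lat)
    print(f"latitude {lat:2d} deg: {omega:5.2f} deg/day  (period ~{360 / omega:.0f} days)")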

Now, researchers from the University of Hawaii Institute for Astronomy (IfA), Brazil, and Stanford University may have stumbled on an answer, and it could all come down to fundamental physics. It seems that the light our sun generates has a braking effect on the sun's surface layers.
(FULL STORY)

Dying Star Offers Glimpse of Earth's Doomsday in 5B Years
[12/12/2016]
Five billion years from now, our sun will die. After running out of hydrogen fuel, it will start burning heavier and heavier elements in its fusion core, causing its body to bloat, shedding huge quantities of material into space via violent stellar winds. During this time, our star will expand to around 100 times its current size, becoming what is known as a "red giant." This dramatic expansion will engulf Mercury and Venus, the two closest planets to the sun.

But what is less clear is what will happen to Earth — will our planet go the way of Mercury and Venus and succumb to an ocean of superheated plasma? Or will our planet escape the worst of the sun's death throes to continue orbiting the tiny white dwarf star that will be left behind?

"We already know that our sun will be bigger and brighter [when entering the red giant phase], so that it will probably destroy any form of life on our planet," said Leen Decin, of the KU Leuven Institute of Astronomy, in a statement. "But will the Earth's rocky core survive the red giant phase and continue orbiting the white dwarf?

With the help of the most powerful radio observatory on the planet, astronomers could soon have a clue by looking at a nearby star system that resembles how our solar system will look when the sun begins to die.

RELATED: Enjoy Earth Day While You Can, There Are Only 5 Billion Left

L2 Puppis is an evolved star located over 200 light-years from Earth. Though this seems far away, it's pretty much on our cosmic doorstep and well within the resolving power of the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. Through precise measurements of the star, astronomers have deduced its mass and age, realizing that it is (or was) a sun-like star that's now 10 billion years old. It's also a prime example of a planetary nebula in the making.

Like our sun five billion years in the future, L2 Puppis is ripping itself apart, blasting huge quantities of gas into space. This process creates a massive glowing cloud, and this particular planetary nebula resembles a beautiful cosmic butterfly.

But that's not all. According to the new study published in the journal Astronomy & Astrophysics, L2 Puppis also appears to have a planet in tow, roughly 300 million kilometers from the star. Though this distance is around twice the distance that Earth orbits the sun, it provides a very privileged view of a world orbiting a dying sun-like star. It's also an ominous preview of what's in store for Earth in a few billion years and the researchers hope to study this unfortunate planet as it experiences the wrath of L2 Puppis.

"We discovered that L2 Puppis is about 10 billion years old," said Ward Homan, also from KU Leuven. "Five billion years ago, the star was an almost perfect twin of our sun as it is today, with the same mass. One third of this mass was lost during the evolution of the star. The same will happen with our sun in the very distant future."

RELATED: Real Doomsday: Earth Dead in 2.8 Billion Years

"Five billion years from now, the sun will have grown into a red giant star, more than a hundred times larger than its current size," said Decin. "It will also experience an intense mass loss through a very strong stellar wind. The end product of its evolution, 7 billion years from now, will be a tiny white dwarf star. This will be about the size of the Earth, but much heavier: one tea spoon of white dwarf material weighs about 5 tons."

Astronomers often look to the stars to better understand our own place in the galaxy. In this case, they've glimpsed the future and seen a key part of the life cycle of a sun-like star. They've also seen a true doomsday, an event so final that it wrecks our sun, taking the nearest planets with it. And though Earth may or may not be swallowed whole by the swelling stellar inferno, it will be sterilized of life — on our planet's roasted surface at least.
(FULL STORY)

Dark Matter Not So Clumpy After All
[12/7/2016]
Dark matter, the mysteriously invisible substance that makes up about 27 percent of the mass in the universe, may not be as clumpy as scientists previously thought.

In 2013, researchers with Europe's Planck mission, which studied the oldest light in the universe, found that dark matter has clumped together over time through gravitational attraction. What started out as a smooth and even distribution of dark matter slowly formed dense chunks over time.

But new research at the European Southern Observatory's (ESO) Very Large Telescope (VLT) at the Paranal Observatory in Chile suggests that dark matter is not quite as clumpy as the Planck mission previously found.
"This latest result indicates that dark matter in the cosmic web, which accounts for about one-quarter of the content of the universe, is less clumpy than we previously believed," Massimo Viola, a researcher at the Leiden Observatory in the Netherlands who co-led in the study, said in a statement.

To see how dark matter is distributed in the universe, the international team of researchers used data from the Kilo Degree Survey (KiDS) at the VLT Survey Telescope. This deep-sky survey looked at about 15 million galaxies in five patches of the southern sky, covering an area as big as 2,200 full moons (or 450 square degrees).
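
The full-moon comparison is straightforward to check: assuming the full Moon spans about half a degree on the sky, its area is roughly 0.2 square degrees, so 450 square degrees works out to a few thousand moons.

import math

# Rough check of the "450 square degrees is about 2,200 full moons" comparison,
# assuming the full Moon's angular diameter is roughly 0.5 degrees.
moon_area_sq_deg = math.pi * (0.5 / 2) ** 2    # ~0.196 square degrees
survey_area_sq_deg = 450.0
print(f"{survey_area_sq_deg / moon_area_sq_deg:.0f} full moons")   # ~2300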

Because dark matter's gravity can bend light — a process called gravitational lensing — the light coming from these 15 million galaxies could reveal information about the structure and distribution of dark matter, the researchers suggest. In this study, they looked for a variation of this phenomenon known as weak gravitational lensing, or cosmic shear.
Weak gravitational lensing is a subtle effect that has to be measured with precision. When large-scale structures like galaxy clusters cause weak gravitational lensing, the light-warping effect is subtler and more difficult to detect than gravitational lensing around smaller objects like stars. But with high-resolution images taken by the VLT Survey Telescope, the researchers were able to detect this subtle effect. This study is the first to use this imaging method on such a large portion of the sky to map the invisible matter in the universe, the authors wrote.

When the researchers then used this data to calculate how clumpy dark matter is, they discovered that it is significantly smoother than the Planck satellite data had previously determined. This means that dark matter may be more evenly distributed than scientists have thought.
How dark matter has spread and clumped together since the Big Bang happened 13.8 billion years ago can provide insights into the evolution of the universe, according to co-author Hendrik Hildebrandt of the Argelander Institute for Astronomy in Bonn, Germany. "Our findings will help to refine our theoretical models of how the universe has grown from its inception up to the present day," Hildebrandt said in the same statement.

"We see an intriguing discrepancy with Planck cosmology at the moment," co-author Konrad Kuijken of the Leiden Observatory in the Netherlands, who is principal investigator of the KiDS survey, said in the statement. "Future missions such as the Euclid satellite and the Large Synoptic Survey Telescope will allow us to repeat these measurements and better understand what the universe is really telling us."
(FULL STORY)

Scientists Catch "Virtual Particles" Hopping In and Out of Existence
[11/30/2016]
About 400 light-years from here, in the area surrounding a neutron star, the electromagnetic field of this unbelievably dense object appears to be creating an area where matter spontaneously appears and then vanishes.

Quantum electrodynamics (QED) describes the relationships between particles of light, or photons, and electrically charged particles such as electrons and protons. The theories of QED suggest that the universe is full of "virtual particles," which are not really particles at all. They are fluctuations in quantum fields that have most of the same properties as particles, except they appear and vanish all the time. Scientists predicted the existence of virtual particles some 80 years ago, but we have never had experimental evidence of this process until now.

SEEING THE INVISIBLE

How can we possibly see such a thing? One of the properties virtual particles have in common with actual particles is that they both affect light. In addition, intense magnetic fields are thought to excite the activity of virtual particles, affecting any light that passes through that space more dramatically.

So a team of astronomers pointed our most advanced ground-based telescope, the European Southern Observatory's Very Large Telescope (VLT), at one of the densest objects we know of: a neutron star.
Neutron stars have magnetic fields that are billions of times stronger than our sun's. Using the VLT, Roberto Mignani from the Italian National Institute for Astrophysics (INAF) and his team observed visible light around the neutron star RX J1856.5-3754 and detected linear polarization—or the alignment of light waves according to external electromagnetic influences—in the empty space around the star. This is rather odd, because conventional physics says that light should pass freely through a vacuum, such as space, without being altered. The degree of linear polarization was substantial (around 16 percent, to be precise), and the only known explanations are theories of QED and the influence of virtual particles.

"According to QED, a highly magnetized vacuum behaves as a prism for the propagation of light, an effect known as vacuum birefringence," Mignani says. "The high linear polarization that we measured with the VLT can't be easily explained by our models unless the vacuum birefringence effects predicted by QED are included."

HOW DO YOU MEASURE SOMETHING THAT DOESN'T ALWAYS EXIST?

Vacuum birefringence was first predicted in the 1930s by Werner Heisenberg and Hans Heinrich Euler. It was an exciting time for the development of quantum mechanics, when many of the advanced theories still studied today were developed.

In the quantum realm, matter behaves very strangely to say the least. It violates both Newton's classical laws of physics and Einstein's theories of relativity and gravity. Matter can exist in two separate places at once. Entangled particles, separated by miles, can influence each other instantaneously. As far as we can tell, the smallest building blocks of matter exist with multiple, or even infinite properties, known as quantum states, until they are observed or measured.

Fortunately, we can model and even predict some quantum phenomena, and we do this using wave functions. A wave, such as a sine curve, is described by an equation that is satisfied by many different values at once. The same basic principle can be applied to physical models of particles that exist in different locations, or with different properties, or sometimes don't exist at all. When the particles are measured, the wave function collapses, and the matter only exists with one set of properties, as you would expect. The researchers were able to measure the virtual particles around a neutron star indirectly, by measuring the light that passes through them.

These concepts are so profound that Einstein and Niels Bohr famously debated, at length, whether the universe even exists as a tangible smattering of matter across the void, or if it is a fluid conglomerate of infinite possible realities until we observe it. The first experimental evidence of vacuum birefringence—absurdly strong electromagnetic forces tugging at the very foundations of matter—reminds us that this is still an open-ended question.
(FULL STORY)

New theory of gravity might explain dark matter
[11/8/2016]
A new theory of gravity might explain the curious motions of stars in galaxies. Emergent gravity, as the new theory is called, predicts the exact same deviation of motions that is usually explained by invoking dark matter. Prof. Erik Verlinde, renowned expert in string theory at the University of Amsterdam and the Delta Institute for Theoretical Physics, published a new research paper today in which he expands his groundbreaking views on the nature of gravity.

In 2010, Erik Verlinde surprised the world with a completely new theory of gravity. According to Verlinde, gravity is not a fundamental force of nature, but an emergent phenomenon. In the same way that temperature arises from the movement of microscopic particles, gravity emerges from the changes of fundamental bits of information, stored in the very structure of spacetime.

Newton's law from information

In his 2010 article (On the origin of gravity and the laws of Newton), Verlinde showed how Newton's famous second law, which describes how apples fall from trees and satellites stay in orbit, can be derived from these underlying microscopic building blocks. Extending his previous work and work done by others, Verlinde now shows how to understand the curious behaviour of stars in galaxies without adding the puzzling dark matter.

The outer regions of galaxies, like our own Milky Way, rotate much faster around the centre than can be accounted for by the quantity of ordinary matter like stars, planets and interstellar gas. Something else has to produce the required amount of gravitational force, so physicists proposed the existence of dark matter. Dark matter seems to dominate our universe, comprising more than 80 percent of all matter. Hitherto, the alleged dark matter particles have never been observed, despite many efforts to detect them.
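
The mismatch that motivates dark matter (or, in Verlinde's proposal, emergent gravity) can be illustrated with a toy calculation: if a galaxy's luminous mass sat mostly in its centre, orbital speeds should fall off with radius, whereas observed rotation curves stay roughly flat. The numbers below are illustrative assumptions, not data.

import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19         # one kiloparsec in metres
M_LUM = 1e41           # assumed luminous mass, kg (~5e10 solar masses)

radii_kpc = np.array([5, 10, 20, 40])
radii_m = radii_kpc * KPC

# Keplerian speeds expected if only the central luminous mass supplied gravity
v_kepler_kms = np.sqrt(G * M_LUM / radii_m) / 1e3

# Observed spiral rotation curves stay roughly flat instead (illustrative value)
v_flat_kms = 220.0

for r, vk in zip(radii_kpc, v_kepler_kms):
    print(f"r = {r:2d} kpc: expected {vk:6.1f} km/s, observed ~{v_flat_kms:.0f} km/s")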

No need for dark matter

According to Erik Verlinde, there is no need to add a mysterious dark matter particle to the theory. In a new paper, which appeared today on the ArXiv preprint server, Verlinde shows how his theory of gravity accurately predicts the velocities by which the stars rotate around the center of the Milky Way, as well as the motion of stars inside other galaxies.

"We have evidence that this new view of gravity actually agrees with the observations, " says Verlinde. "At large scales, it seems, gravity just doesn't behave the way Einstein's theory predicts."

At first glance, Verlinde's theory presents features similar to modified theories of gravity like MOND (modified Newtonian Dynamics, Mordehai Milgrom (1983)). However, where MOND tunes the theory to match the observations, Verlinde's theory starts from first principles. "A totally different starting point," according to Verlinde.

Adapting the holographic principle

One of the ingredients in Verlinde's theory is an adaptation of the holographic principle, introduced by his tutor Gerard 't Hooft (Nobel Prize 1999, Utrecht University) and Leonard Susskind (Stanford University). According to the holographic principle, all the information in the entire universe can be described on a giant imaginary sphere around it. Verlinde now shows that this idea is not quite correct—part of the information in our universe is contained in space itself.

This extra information is required to describe that other dark component of the universe: Dark energy, which is believed to be responsible for the accelerated expansion of the universe. Investigating the effects of this additional information on ordinary matter, Verlinde comes to a stunning conclusion. Whereas ordinary gravity can be encoded using the information on the imaginary sphere around the universe, as he showed in his 2010 work, the result of the additional information in the bulk of space is a force that nicely matches that attributed to dark matter.

On the brink of a scientific revolution

Gravity is in dire need of new approaches like the one by Verlinde, since it doesn't combine well with quantum physics. Both theories, crown jewels of 20th century physics, cannot be true at the same time. The problems arise in extreme conditions: near black holes, or during the Big Bang. Verlinde says, "Many theoretical physicists like me are working on a revision of the theory, and some major advancements have been made. We might be standing on the brink of a new scientific revolution that will radically change our views on the very nature of space, time and gravity."
(FULL STORY)

Supersolids produced in exotic state of quantum matter
[11/7/2016]
A mind-bogglingly strange state of matter may have finally made its appearance. Two teams of scientists report the creation of supersolids, which are both liquid and solid at the same time. Supersolids have a crystalline structure like a solid, but can simultaneously flow like a superfluid, a liquid that flows without friction.

Research teams from MIT and ETH Zurich both produced supersolids in an exotic form of matter known as a Bose-Einstein condensate. Reports of the work were published online at arXiv.org on October 26 (by the MIT group) and September 28 (by the Zurich group).

Bose-Einstein condensates are created when a group of atoms, chilled to near absolute zero, huddle up into the same quantum state and begin behaving like a single entity. The scientists’ trick for creating a supersolid was to nudge the condensate, which is already a superfluid, into simultaneously behaving like a solid. To do so, the MIT and Zurich teams created regular density variations in the atoms — like the repeating crystal structure of a more typical solid — in the system. That density variation stays put, even though the fluid can still flow.

The new results may be the first supersolids ever created — at least by some definitions. “It’s certainly the first case where you can unambiguously look at a system and say this is both a superfluid and a solid,” says Sarang Gopalakrishnan of the College of Staten Island of the City University of New York. But the systems are far from what physicists predicted when they first dreamt up the strange materials.

Scientists originally expected supersolids to appear in helium-4 — an isotope of the element helium and the same gas that fills balloons at children’s birthday parties. Helium-4 can be chilled and pressurized to produce a superfluid or a solid. Supersolid helium would have been a mixture of these two states.

Previous claims of detecting supersolid helium-4, however, didn’t hold up to scrutiny (SN Online: 10/12/2012). So, says Nikolay Prokof’ev of the University of Massachusetts Amherst, “now we have to go to the artificial quantum matter.” Unlike helium-4, Bose-Einstein condensates can be precisely controlled with lasers, and tuned to behave as scientists wish.

The two groups of scientists formed their supersolids in different ways. By zapping their condensate with lasers, the MIT group induced an interaction that gave some of the atoms a shove. This motion caused an interference between the pushed and the motionless atoms that’s similar to the complex patterns of ripples that can occur when waves of water meet. As a result, zebralike stripes — alternating high- and low-density regions — formed in the material, indicating that it was a solid.

Applying a different method, the ETH Zurich team used two optical cavities — sets of mirrors between which light bounces back and forth repeatedly. The light waves inside the cavities caused atoms to interact and thereby arrange themselves into a crystalline pattern, with atoms separated by an integer number of wavelengths of light.

Authors of the two studies declined to comment on the research, as the papers have been submitted to embargoed journals.

“Experimentally, of course, these are absolutely fantastic achievements,” says Anatoly Kuklov of the College of Staten Island. But, he notes, the particles in the supersolid Bose-Einstein condensates do not interact as strongly as particles would in supersolid helium-4. The idea of a supersolid is so strange because superfluid and solid states compete, and in most materials atoms are forced to choose one or the other. But in Bose-Einstein condensates these two states can more easily live together in harmony, making the weird materials less counterintuitive than supersolid helium-4 would be.

Additionally, says Prokof’ev, “some people will say ‘OK, well, this does not qualify exactly for supersolid state,’” because the spacing of the density variations was set externally, rather than arising naturally as it would have in helium.

Still, such supersolids are interesting for their status as a strange and new type of material. “These are great works,” says Kuklov. “Wide attention is now being paid to supersolidity.”
(FULL STORY)

You Can 3D Print Your Own Mini Universe
[11/1/2016]
Have you ever wondered what the universe looks like in all of its entirety, or how it would feel to hold the universe in the palm of your hand? Good news: It is now possible to do both of these things — all you need is a 3D printer.

Researchers at Imperial College London have created the blueprints for 3D printing the universe, and have provided the instructions online so anyone with access to a 3D printer can print their own miniature universe. You can see a video on the science behind the 3D-printed universe here.

The researchers' representation of the universe specifically depicts the cosmic microwave background (CMB), or a glowing light throughout the universe that is thought to be leftover radiation from the Big Bang, when the universe was born about 13.8 billion years ago.
(FULL STORY)

Creating Antimatter Via Lasers?
[9/27/2016]
Russian researchers develop calculations to explain the production and dynamics of positrons in the hole-boring regime of ultrahigh-intensity laser-matter interactions.
Dramatic advances in laser technologies are enabling novel studies to explore laser-matter interactions at ultrahigh intensity. By focusing high-power laser pulses, researchers routinely produce electric fields orders of magnitude greater than those found within atoms, and these fields may soon become intense enough to create matter from light.

Now, intriguing calculations from a research team at the Institute of Applied Physics of the Russian Academy of Sciences (IAP RAS), and reported this week in Physics of Plasmas, from AIP Publishing, explain the production and dynamics of electrons and positrons from ultrahigh-intensity laser-matter interactions. In other words: They’ve calculated how to create matter and antimatter via lasers.

Strong electric fields cause electrons to undergo huge radiation losses because a significant amount of their energy is converted into gamma rays -- high-energy photons, which are the particles that make up light. The high-energy photons produced by this process interact with the strong laser field and create electron-positron pairs. As a result, a new state of matter emerges: strongly interacting particles, optical fields, and gamma radiation, whose dynamics are governed by the interplay between classical physics phenomena and quantum processes.

A key concept behind the team’s work is based on the quantum electrodynamics (QED) prediction that “a strong electric field can, generally speaking, ‘boil the vacuum,’ which is full of ‘virtual particles,’ such as electron-positron pairs,” explained Igor Kostyukov of IAP RAS. “The field can convert these types of particles from a virtual state, in which the particles aren’t directly observable, to a real one.”

One impressive manifestation of this type of QED phenomenon is a self-sustained laser-driven QED cascade, which is a grand challenge yet to be observed in a laboratory.

But, what’s a QED cascade?

“Think of it as a chain reaction in which each chain link consists of sequential processes,” Kostyukov said. “It begins with acceleration of electrons and positrons within the laser field. This is followed by emission of high-energy photons by the accelerated electrons and positrons. Then, the decay of high-energy photons produces electron-positron pairs, which go on to new generations of cascade particles. A QED cascade leads to an avalanche-like production of electron-positron high-energy photon plasmas.”

For this work, the researchers explored the interaction of a very intense laser pulse with a foil via numerical simulations.

“We expected to produce a large number of high-energy photons, and that some portion of them would decay and produce electron-positron pairs,” Kostyukov continued. “Our first surprise was that the number of high-energy photons produced by the positrons is much greater than that produced by the electrons of the foil. This led to an exponential -- very sharp -- growth of the number of positrons, which means that if we detect a larger number of positrons in a corresponding experiment we can conclude that most of them are generated in a QED cascade.”

They were also able to observe a distinct structure of the positron distribution in the simulations -- despite some randomness of the processes of photon emission and decay.

“By analyzing the positron motion in the electromagnetic fields in front of the foil analytically, we discovered that some characteristics of the motion regulate positron distribution and led to helical-like structures being observed in the simulations,” he added.

The team’s discoveries are of fundamental importance because the phenomenon they explored can accompany the laser-matter interaction at extreme intensities within a wider range of parameters. “It offers new insights into the properties of these types of interactions,” Kostyukov said. “More practical applications may include the development of advanced ideas for the laser-plasma sources of high-energy photons and positrons whose brilliance significantly exceeds that of the modern sources.”

So far, the researchers have focused on the initial stage of interaction when the electron-positron pairs they produced don't significantly affect the laser-target interaction.

“Next, we’re exploring the nonlinear stage when the self-generated electron-positron plasma strongly modifies the interaction,” he said. “And we’ll also try to expand our results to more general configurations of the laser–matter interactions and other regimes of interactions -- taking a wider range of parameters into consideration.”


The article, "Production and dynamics of positrons in ultrahigh intensity laser-foil interactions," is authored by I. Yu. Kostyukov and E. N. Nerush. The article will appear in the journal Physics of Plasmas on September 27, 2016 (DOI: 10.1063/1.4962567). After that date, it can be accessed at http://scitation.aip.org/content/aip/journal/pop/23/9/10.1063/1.4962567.
(FULL STORY)

No, Astronomers Haven't Decided Dark Energy Is Nonexistent
[10/26/2016]
This week, a number of media outlets have put out headlines like "The universe is expanding at an accelerating rate, or is it?” and “The Universe Is Expanding But Not At An Accelerating Rate New Research Debunks Nobel Prize Theory.” This excitement is due to a paper just published in Nature’s Scientific Reports called "Marginal evidence for cosmic acceleration from Type Ia supernovae,” by Nielsen, Guffanti and Sarkar.
Once you read the article, however, it’s safe to say there is no need to revise our present understanding of the universe. All the paper does is slightly reduce our certainty in what we know—and then only by discarding most of the cosmological data on which our understanding is based. It also ignores important details in the data it does consider. And even if you leave aside these issues, the headlines are wrong anyway. The study concluded that we’re now only 99.7 percent sure that the universe is accelerating, which is hardly the same as “it’s not accelerating.”
The initial discovery that the universe is expanding at an accelerating rate was made by two teams of astronomers in 1998 using Type Ia Supernovae as cosmic measuring tools. Supernovae—exploding stars—are some of the most powerful blasts in the entire cosmos, roughly equivalent to a billion-billion-billion atomic bombs exploding at once. Type Ia's are a special kind of supernova in that, unlike other supernovae, they all explode with just about the same luminosity every time, likely due to a critical mass limit. This similarity means that the differences in their observed brightness are almost entirely based on how far away they are. This makes them ideal for measuring cosmic distances. Furthermore, these objects are relatively common, and they are so bright that we can see them billions of light years away. This shows us how the universe appeared billions of years ago, which we can compare to how it looks today.

These supernovae are often called "standard candles" for their consistency, but they're more accurately "standardizable candles," because in practice, their precision and accuracy can be improved still further by accounting for small differences in their explosions by observing how long the explosion takes to unfold and how the color of the supernovae are reddened by dust between them and us. Finding a way to do these corrections robustly was what led to the discovery of the accelerating universe.
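
The standard-candle logic can be written in one formula: if every Type Ia peaks at (nearly) the same absolute magnitude M, the apparent magnitude m fixes the distance through the distance modulus m - M = 5 log10(d / 10 pc). The sketch below uses a commonly quoted approximate peak magnitude of about -19.3; it is an illustration, not the corrected fitting used in the actual analyses.

M_ABS = -19.3   # approximate peak absolute magnitude of a Type Ia supernova

def luminosity_distance_mpc(apparent_mag):
    """Distance in megaparsecs from the distance modulus m - M = 5*log10(d/10pc)."""
    d_parsecs = 10 ** ((apparent_mag - M_ABS + 5) / 5)
    return d_parsecs / 1e6

print(f"{luminosity_distance_mpc(14.0):7.1f} Mpc for a bright, nearby supernova")
print(f"{luminosity_distance_mpc(24.0):7.0f} Mpc for a faint, distant one")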
The recent paper that has generated headlines used a catalog of Type Ia supernovae collected by the community (including us) which has been analyzed numerous times before. But the authors used a different method of implementing the corrections—and we believe this undercuts the accuracy of their results. They assume that the mean properties of supernovae from each of the samples used to measure the expansion history are the same, even though they have been shown to be different and past analyses have accounted for these differences. However, even ignoring these differences, the authors still find that there is roughly a 99.7 percent chance that the universe is accelerating—very different from what the headlines suggest.

Furthermore, the overwhelming confidence astronomers have that the universe is expanding faster now than it was billions of years ago is based on much more than just supernova measurements. These include tiny fluctuations in the pattern of relic heat after the Big Bang (i.e., the cosmic microwave background) and the modern-day imprint of those fluctuations in the distribution of galaxies around us (called baryon acoustic oscillations). The present study also ignores the presence of a substantial amount of matter in the Universe, confirmed numerous times and in numerous ways since the 1970s, further reducing confidence in the study's conclusions. These other data show the universe to be accelerating independently from supernovae. If we combine the other observations with the supernova data, we go from 99.99 percent sure to 99.99999 percent sure. That's pretty sure!
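
The percentages above map onto the "sigma" language physicists use for statistical confidence. A quick conversion with the normal distribution (Python's math.erf) shows why 99.7 percent is described as marginal, while the combined datasets push well past the conventional five-sigma discovery threshold.

import math

def two_sided_confidence(n_sigma):
    """Probability mass within +/- n_sigma of the mean of a normal distribution."""
    return math.erf(n_sigma / math.sqrt(2))

for n in (3, 4, 5):
    print(f"{n} sigma -> {two_sided_confidence(n) * 100:.5f}%")
# 3 sigma is about 99.73%; 5 sigma is about 99.99994%.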
We now know that dark energy, which is what we believe causes the expansion of the universe to accelerate, makes up 70 percent of the universe, with matter constituting the rest. The nature of dark energy is still one of the largest mysteries of all of astrophysics. But there has been no active debate about whether dark energy exists and none about whether the universe is accelerating since this picture was cemented a decade ago.
There are now many new large surveys, both on the ground and in space, whose top priority over the next two decades is to figure out exactly what this dark energy could be. For now, we have to continue to improve our measurements and question our assumptions. While this recent paper does not disprove any theories, it is still good for everyone to pause for a second and remember how big the questions are that we are asking, how we reached the conclusions we have to date and how seriously we need to test each building block of our understanding.
(FULL STORY)

Behind This Plant's Blue Leaves Lies a Weird Trick of Quantum Mechanics
[10/24/2016]
In the fading twilight on the rainforest floor, a plant's leaves glimmer iridescent blue. And now scientists know why. These exotic blue leaves pull more energy out of dim light than ordinary leaves because of an odd trick of quantum mechanics.

A team of plant scientists led by Heather Whitney of the University of Bristol in the U.K. has just discovered the remarkable origin and purpose of the shiny cobalt leaves on the Malaysian tropical plant Begonia pavonina. The plant owes its glimmer to its peculiar machinery for photosynthesis, the process plants use to turn light into chemical energy. Strangely enough, these blue leaves can squeeze more energy out of the red-green light that reaches the eternally dim rainforest floor. Whitney and her colleagues describe the blue leaves today in the journal Nature Plants.

"It's actually quite brilliant. Plants have to cope with every obstacle that's thrown at them without running away. Here we see evidence of a plant that's actually evolved to physically manipulate the little light it receives," says Whitney, "it's quite amazing, and was an absolutely surprising discovery."
(FULL STORY)

Small entropy changes allow quantum measurements to be nearly reversed
[9/30/2016]
In 1975, Swedish physicist Göran Lindblad developed a theorem that describes the change in entropy that occurs during a quantum measurement. Today, this theorem is a foundational component of quantum information theory, underlying such important concepts as the uncertainty principle, the second law of thermodynamics, and data transmission in quantum communication systems.

Now, 40 years later, physicist Mark M. Wilde, Assistant Professor at Louisiana State University, has improved this theorem in a way that allows for understanding how quantum measurements can be approximately reversed under certain circumstances. The new results allow for understanding how quantum information that has been lost during a measurement can be nearly recovered, which has potential implications for a variety of quantum technologies.
Quantum relative entropy never increases
Most people are familiar with entropy as a measure of disorder and the law that "entropy never decreases"—it either increases or stays the same during a thermodynamic process, according to the second law of thermodynamics. However, here the focus is on "quantum relative entropy," which in some sense is the negative of entropy, so the reverse is true: quantum relative entropy never increases, but instead only decreases or stays the same.
In fact, this was the entropy inequality theorem that Lindblad proved in 1975: that the quantum relative entropy cannot increase after a measurement. In this context, quantum relative entropy is interpreted as a measure of how well one can distinguish between two quantum states, so it's this distinguishability that can never increase. (Wilde describes a proof of Lindblad's result in greater detail in his textbook Quantum Information Theory, published by Cambridge University Press.)
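As a small numerical illustration of Lindblad's inequality (a toy example, not drawn from Wilde's paper), one can compute the quantum relative entropy D(rho||sigma) = Tr[rho (log rho - log sigma)] for two qubit states before and after a simple quantum channel, and check that it does not increase. The states and the dephasing channel below are arbitrary choices made for illustration.

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = Tr[rho (log rho - log sigma)], in nats."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

def dephase(rho, p):
    """A simple quantum channel: dephasing with strength p (shrinks off-diagonal terms)."""
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return (1 - p) * rho + p * Z @ rho @ Z

# Two arbitrary qubit density matrices (Hermitian, positive, unit trace).
rho   = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)
sigma = np.array([[0.4, 0.1], [0.1, 0.6]], dtype=complex)

before = rel_entropy(rho, sigma)
after = rel_entropy(dephase(rho, 0.25), dephase(sigma, 0.25))
print(before, after)   # Lindblad's theorem guarantees after <= before
```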
One thing that Lindblad's proof doesn't address, however, is whether it makes any difference if the quantum relative entropy decreases by a little or by a lot after a measurement.
In the new paper, Wilde has shown that, if the quantum relative entropy decreases by only a little, then the quantum measurement (or any other type of so-called "quantum physical evolution") can be approximately reversed.
"When looking at Lindblad's entropy inequality, a natural question is to wonder what we could say if the quantum relative entropy goes down only by a little when the quantum physical evolution is applied," Wilde told Phys.org. "It is quite reasonable to suspect that we might be able to approximately reverse the evolution. This was arguably open since the work of Lindblad in 1975, addressed in an important way by Denes Petz in the late 1980s (for the case in which the quantum relative entropy stays the same under the action of the evolution), and finally formulated as a conjecture around 2008 by Andreas Winter. What my work did was to prove this result as a theorem: if the quantum relative entropy goes down only by a little under a quantum physical evolution, then we can approximately reverse its action."

Wilde's improvements to Lindblad's theorem have a variety of implications, but the main one that Wilde discusses in his paper is how the new results allow for recovering quantum information.
"If the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small," he said, "then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution." So the smaller the relative entropy decrease, the better the reversal process.
The ability to recover quantum information could prove useful for quantum error correction, which aims to protect quantum information from damaging external effects. Wilde plans to address this application more in the future with his colleagues.
As Wilde explained, Lindblad's original theorem can also be used to prove the uncertainty principle of quantum mechanics in terms of entropies, as well as the second law of thermodynamics for quantum systems, so the new results have implications in these areas, as well.
"Lindblad's entropy inequality underlies many limiting statements, in some cases said to be physical laws or principles," Wilde said. "Examples are the uncertainty principle and the second law of thermodynamics. Another example is that this entropy inequality is the core step in determining limitations on how much data we can communicate over quantum communication channels. We could go as far as to say that the above entropy inequality constitutes a fundamental law of quantum information theory, which is a direct mathematical consequence of the postulates of quantum mechanics."
Regarding the uncertainty principle, Wilde and two coauthors, Mario Berta and Stephanie Wehner, discuss this angle in a forthcoming paper. They explain that the uncertainty principle involves quantum measurements, which are a type of quantum physical evolution and therefore subject to Lindblad's theorem. In one formulation of the uncertainty principle, two experiments are performed on different copies of the same quantum state, with both experimental outcomes having some uncertainty.
"The uncertainty principle is the statement that you cannot generally make the uncertainties of both experiments arbitrarily small, i.e., there is generally a limitation," Wilde said. "It is now known that a statement of the uncertainty principle in terms of entropies can be proved by using the 'decrease of quantum relative entropy inequality.' So what the new theorem allows for doing is relating the uncertainties of the measurement outcomes to how well we could try to reverse the action of one of the measurements. That is, there is now a single mathematical inequality which captures all of these notions."
In terms of the second law of thermodynamics, Wilde explains how the new results have implications for reversing thermodynamic processes in both classical and quantum systems.
"The new theorem allows for quantifying how well we can approximately reverse a thermodynamic transition from one state to another without using any energy at all," he said.
He explained that this is possible due to the connection between entropy, energy, and work. According to the second law of thermodynamics, a thermodynamic transition from one quantum state to another is allowed only if the free energy decreases from the original state to the final state. During this process, one can gain work and store energy. This law can be rewritten as a statement involving relative entropies and can be proved as a consequence of the decrease of quantum relative entropy.
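The rewriting mentioned above can be made concrete with a toy example: for a system with Hamiltonian H in contact with a bath at temperature T, the free-energy difference between a state rho and the thermal state gamma equals kT times the quantum relative entropy D(rho||gamma). A minimal numerical check, assuming a two-level Hamiltonian and units in which kT = 1 (none of this is taken from the paper):

```python
import numpy as np
from scipy.linalg import expm, logm

kT = 1.0                                   # work in units where k_B * T = 1
H = np.diag([0.0, 1.0])                    # assumed toy qubit Hamiltonian
gamma = expm(-H / kT)
gamma /= np.trace(gamma)                   # thermal (Gibbs) state

def entropy(rho):
    return -np.real(np.trace(rho @ logm(rho)))

def free_energy(rho):
    return np.real(np.trace(rho @ H)) - kT * entropy(rho)

def rel_entropy(rho, sigma):
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

rho = np.array([[0.8, 0.2], [0.2, 0.2]], dtype=complex)   # some non-thermal state
lhs = free_energy(rho) - free_energy(gamma)
rhs = kT * rel_entropy(rho, gamma)
print(lhs, rhs)   # the two agree: F(rho) - F(gamma) = kT * D(rho||gamma)
```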
"What my new work with Stephanie Wehner and Mischa Woods allows for is a refinement of this statement," Wilde said. "We can say that if the free energy does not go down by very much under a thermodynamic transition (i.e., if there is not too much work gained in the process), then it is possible to go back approximately to the original state from the final state, without investing any work at all. The key word here is that you can go back only approximately, so we are not in violation of the second law, only providing a refinement of it."
In addition to these implications, the new theorem can also be applied to other research topics in quantum information theory, including the Holevo bound, quantum discord, and multipartite information measures.
Wilde's work was funded in part by The DARPA Quiness program (ending now), which focused on quantum key distribution, or using quantum mechanics to ensure secret communication between two parties. He describes more about this application, in particular how Alice and Bob might use a quantum state to share secrets that can be kept private from an eavesdropper Eve (and help them survive being attacked by a bear), in a recent blog post.
(FULL STORY)

Did the Mysterious 'Planet Nine' Tilt the Solar System?
[10/19/2016]
The putative "Planet Nine" may have tilted the entire solar system, researchers say.

In January, astronomers revealed evidence for the potential existence of another planet in the solar system. Researchers suggest that if this world — dubbed Planet Nine — exists, it could be about 10 times Earth's mass and orbit the sun at a distance about 500 times the distance from the Earth to the sun.
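For scale (a back-of-envelope calculation, not from the study), Kepler's third law gives the orbital period implied by that distance:

```python
# Kepler's third law for a body orbiting the Sun: P in years equals a^(3/2)
# with a in astronomical units. The 500 AU figure is the article's rough value.
a_au = 500
period_years = a_au ** 1.5
print(f"~{period_years:,.0f} years per orbit")   # roughly 11,000 years
```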

Previous research suggested that Planet Nine would possess a highly tilted orbit compared with the relatively thin, flat zone in which the eight official planets circle the sun. This led scientists to investigate whether Planet Nine's slant might help explain other tilting seen elsewhere in the solar system.
Now, researchers suggest that Planet Nine's influence might have tilted the entire solar system except the sun.

"Planet Nine may have tilted the other planets over the lifetime of the solar system," said study lead author Elizabeth Bailey, an astrophysicist and planetary scientist at the California Institute of Technology in Pasadena.

Prior work found that the zone in which the eight major planets orbit the sun is tilted by about 6 degrees compared to the sun's equator. This discrepancy has long been a mystery in astronomy.
Bailey and her colleagues ran computer simulations that suggest that the tilt of the eight official planets can be explained by the gravitational influence of Planet Nine "over the 4.5-billion-years-ish lifetime of the solar system," Bailey told Space.com.

Bailey did note that there are other potential explanations for the tilt of the solar system. One alternative is that electrically charged particles influenced by the young sun's magnetic field could have interacted with the disk of gas and dust that gave rise to the planets in ways that tilted the solar system. Another possibility is that there might have been an imbalance in the mass of the nascent sun's core.

"However, all these other ways to explain why the solar system is tilted are really hard to test — they all invoke processes that were possibly present really early in the solar system," Bailey said. "Planet Nine is the first thing that has been proposed to tilt the solar system that doesn't depend on early conditions, so if we find Planet Nine, we will be able to see if it's the only thing responsible for the tilt, or if anything else may have played a role."

The scientists detailed their findings yesterday (Oct. 18) at a joint meeting of the American Astronomical Society's Division for Planetary Sciences and European Planetary Science Congress in Pasadena, California.
(FULL STORY)

Cosmological mystery solved by largest ever map of voids and superclusters
[10/12/2016]
A team of astrophysicists at the University of Portsmouth have created the largest ever map of voids and superclusters in the universe, which helps solve a long-standing cosmological mystery. The map of the positions of cosmic voids – large empty spaces which contain relatively few galaxies – and superclusters – huge regions with many more galaxies than normal – can be used to measure the effect of dark energy 'stretching' the universe.

The results confirm the predictions of Einstein's theory of gravity.
Lead author Dr Seshadri Nadathur from the University's Institute of Cosmology and Gravitation said: "We used a new technique to make a very precise measurement of the effect that these structures have on photons from the cosmic microwave background (CMB) – light left over from shortly after the Big Bang – passing through them.

"Light from the CMB travels through such voids and superclusters on its way to us. According to Einstein's General Theory of Relativity, the stretching effect of dark energy causes a tiny change in the temperature of CMB light depending on where it came from. Photons of light travelling through voids should appear slightly colder than normal and those arriving from superclusters should appear slightly hotter. "This is known as the integrated Sachs-Wolfe (ISW) effect.

"When this effect was studied by astronomers at the University of Hawai'i in 2008 using an older catalogue of voids and superclusters, the effect seemed to be five times bigger than predicted. This has been puzzling scientists for a long time, so we looked at it again with new data."
To create the map of voids and superclusters, the Portsmouth team used more than three-quarters of a million galaxies identified by the Sloan Digital Sky Survey. This gave them a catalogue of structures more than 300 times bigger than the one previously used.
The scientists then used large computer simulations of the universe to predict the size of the ISW effect. Because the effect is so small, the team had to develop a powerful new statistical technique to be able to measure the CMB data.

They applied this technique to CMB data from the Planck satellite, and were able to make a very precise measurement of the ISW effect of the voids and superclusters. Unlike in the previous work, they found that the new result agreed extremely well with predictions using Einstein's gravity.
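The team's actual pipeline is not described here, but the general idea behind this kind of measurement can be sketched: average ("stack") the CMB temperature at the positions of voids and compare it with the average at random positions. The numbers below are entirely synthetic and only illustrate why a very large catalogue is needed to pull a few-microkelvin signal out of fluctuations a hundred times larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sky: Gaussian CMB-like fluctuations of ~100 microkelvin per pixel.
n_pix = 1_000_000
cmb = rng.normal(0.0, 100.0, n_pix)

# Inject a tiny ISW-like cold imprint of -2 microkelvin at "void" positions.
void_pix = rng.choice(n_pix, 10_000, replace=False)
cmb[void_pix] -= 2.0

# Stack: compare the mean temperature at void positions with random positions.
random_pix = rng.choice(n_pix, 10_000, replace=False)
print(f"voids:  {cmb[void_pix].mean():+.2f} uK")
print(f"random: {cmb[random_pix].mean():+.2f} uK")
print(f"standard error per stack: ~{100 / np.sqrt(10_000):.2f} uK")
```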
Dr Nadathur said: "Our results resolve one long-standing cosmological puzzle, but doing so has deepened the mystery of a very unusual 'Cold Spot' in the CMB.
"It has been suggested that the Cold Spot could be due to the ISW effect of a gigantic 'supervoid' which has been seen in that region of the sky. But if Einstein's gravity is correct, the supervoid isn't big enough to explain the Cold Spot.
"It was thought that there was some exotic gravitational effect contradicting Einstein which would simultaneously explain both the Cold Spot and the unusual ISW results from Hawai'i. But this possibility has been set aside by our new measurement – and so the Cold Spot mystery remains unexplained."
(FULL STORY)

The Universe Has 10 Times More Galaxies Than Scientists Thought
[10/13/2016]
More than a trillion galaxies are lurking in the depths of space, a new census of galaxies in the observable universe has found — 10 times more galaxies than were previously thought to exist.

An international team of astronomers used deep-space images and other data from the Hubble Space Telescope to create a 3D map of the known universe, which contains about 100 to 200 billion galaxies. In particular, they relied on Hubble's Deep Field images, which revealed the most distant galaxies ever seen with a telescope. [Video: Our Universe Has Trillions of Galaxies, Hubble Study]

Then, the researchers incorporated new mathematical models to calculate where other galaxies that have not yet been imaged by a telescope might exist. For the numbers to add up, the universe needs at least 10 times more galaxies than those already known to exist. But these unknown galaxies are likely either too faint or too far away to be seen with today's telescopes.
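As a toy version of that extrapolation (with an assumed faint-end slope, not the study's actual models), one can integrate a Schechter-type luminosity function and see how quickly the counts grow as the faint limit is pushed down:

```python
import numpy as np
from scipy.integrate import quad

# Schechter-like galaxy luminosity function: n(L) ~ (L/L*)^alpha * exp(-L/L*).
# alpha is an assumed, illustrative faint-end slope, not a value from the paper.
alpha = -1.7

def counts_above(l_min):
    """Relative number of galaxies brighter than l_min (in units of L*)."""
    integrand = lambda x: x**alpha * np.exp(-x)
    value, _ = quad(integrand, l_min, np.inf)
    return value

bright = counts_above(0.1)      # only relatively bright galaxies
faint = counts_above(0.001)     # push the limit a thousand times fainter
print(f"count ratio faint/bright ~ {faint / bright:.0f}x")
```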
"It boggles the mind that over 90 percent of the galaxies in the universe have yet to be studied," Christopher Conselice, a professor of astrophysics at the University of Nottingham in the U.K., who led the study, said in a statement. "Who knows what interesting properties we will find when we observe these galaxies with the next generation of telescopes."

Looking far out into deep space also means looking back in time, because light takes a long time to travel across cosmic distances. During the study, Conselice and his team looked at parts of the universe up to 13 billion light-years away. Looking this far allowed the researchers to see partial snapshots of the evolution of the universe since 13 billion years ago, or less than 100 million years after the Big Bang.

They discovered that the early universe contained even more galaxies than it does today. Those distant galaxies were small and faint dwarf galaxies, they found. As the universe evolves, such galaxies merge together to form larger galaxies.
In a separate statement, Conselice said that the results are "very surprising as we know that, over the 13.7 billion years of cosmic evolution since the Big Bang, galaxies have been growing through star formation and mergers with other galaxies. Finding more galaxies in the past implies that significant evolution must have occurred to reduce their number through extensive merging of systems."

The results of the study are detailed in The Astrophysical Journal.
(FULL STORY)

Correlation between galaxy rotation and visible matter puzzles astronomers
[10/7/2016]
A new study of the rotational velocities of stars in galaxies has revealed a strong correlation between the motion of the stars and the amount of visible mass in the galaxies. This result comes as a surprise because it is not predicted by conventional models of dark matter.
Stars on the outskirts of rotating galaxies orbit just as fast as those nearer the centre. This appears to be in violation of Newton's laws, which predict that these outer stars would be flung away from their galaxies. The extra gravitational glue provided by dark matter is the conventional explanation for why these galaxies stay together. Today, our most cherished models of galaxy formation and cosmology rely entirely on the presence of dark matter, even though the substance has never been detected directly.
These new findings, from Stacy McGaugh and Federico Lelli of Case Western Reserve University, and James Schombert of the University of Oregon, threaten to shake things up. They measured the gravitational acceleration of stars in 153 galaxies with varying sizes, rotations and brightness, and found that the measured accelerations can be expressed as a relatively simple function of the visible matter within the galaxies. Such a correlation does not emerge from conventional dark-matter models.
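One widely quoted form of such a relation links the observed acceleration to the acceleration expected from the visible matter alone through a single characteristic scale. The functional form and the value of g_dag below are assumptions for illustration; the article itself does not quote the fitting function.

```python
import numpy as np

G_DAG = 1.2e-10   # assumed characteristic acceleration scale, m/s^2

def g_observed(g_baryonic):
    """Observed centripetal acceleration as a function of the visible-matter one."""
    return g_baryonic / (1.0 - np.exp(-np.sqrt(g_baryonic / G_DAG)))

for g_bar in (1e-12, 1e-11, 1e-10, 1e-9):
    print(f"g_bar = {g_bar:.0e} -> g_obs = {g_observed(g_bar):.2e} m/s^2")
# At high accelerations g_obs ~ g_bar (Newtonian regime); at low accelerations
# the observed value exceeds the visible-matter prediction.
```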
(FULL STORY)

The Spooky Secret Behind Artificial Intelligence's Incredible Power
[10/7/2016]
Spookily powerful artificial intelligence (AI) systems may work so well because their structure exploits the fundamental laws of the universe, new research suggests.

The new findings may help answer a longstanding mystery about a class of artificial intelligence that employs a strategy called deep learning. These deep learning or deep neural network programs, as they're called, are algorithms that have many layers in which lower-level calculations feed into higher ones. Deep neural networks often perform astonishingly well at solving problems as complex as beating the world's best player of the strategy board game Go or classifying cat photos, yet no one fully understands why.

It turns out, one reason may be that they are tapping into the very special properties of the physical world, said Max Tegmark, a physicist at the Massachusetts Institute of Technology (MIT) and a co-author of the new research.
The laws of physics only present this "very special class of problems" — the problems that AI shines at solving, Tegmark told Live Science. "This tiny fraction of the problems that physics makes us care about and the tiny fraction of problems that neural networks can solve are more or less the same," he said. [Super-Intelligent Machines: 7 Robotic Futures]

Deep learning

Last year, AI accomplished a task many people thought impossible: AlphaGo, Google DeepMind's deep-learning system, defeated the world's best Go player after trouncing the European Go champion. The feat stunned the world because the number of potential Go board configurations exceeds the number of atoms in the universe, and past Go-playing robots performed only as well as a mediocre human player.

But even more astonishing than DeepMind's utter rout of its opponents was how it accomplished the task.

"The big mystery behind neural networks is why they work so well," said study co-author Henry Lin, a physicist at Harvard University. "Almost every problem we throw at them, they crack."

For instance, DeepMind was not explicitly taught Go strategy and was not trained to recognize classic sequences of moves. Instead, it simply "watched" millions of games, and then played many, many more against itself and other players.

Like newborn babies, these deep-learning algorithms start out "clueless," yet typically outperform other AI algorithms that are given some of the rules of the game in advance, Tegmark said.

Another long-held mystery is why these deep networks are so much better than so-called shallow ones, which contain as few as one layer, Tegmark said. Deep networks have a hierarchy and look a bit like connections between neurons in the brain, with lower-level data from many neurons feeding into another "higher" group of neurons, repeated over many layers. In a similar way, deep layers of these neural networks make some calculations, and then feed those results to a higher layer of the program, and so on, he said.
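As a bare-bones sketch of that layered structure (generic code with random weights, not the networks studied in the paper), each layer transforms the previous layer's output and passes it upward:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    """One layer: weighted combination of the inputs followed by a simple nonlinearity."""
    w = rng.normal(size=(n_out, x.size))
    b = rng.normal(size=n_out)
    return np.tanh(w @ x + b)

x = rng.normal(size=8)        # "low-level" input data
h1 = layer(x, 16)             # first hidden layer
h2 = layer(h1, 16)            # second hidden layer, fed by the first
out = layer(h2, 1)            # "higher" layers build on the lower ones
print(out)
```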

Magical keys or magical locks?

To understand why this process works, Tegmark and Lin decided to flip the question on its head.

"Suppose somebody gave you a key. Every lock you try, it seems to open. One might assume that the key has some magic properties. But another possibility is that all the locks are magical. In the case of neural nets, I suspect it's a bit of both," Lin said.

One possibility could be that the "real world" problems have special properties because the real world is very special, Tegmark said.

Take one of the biggest neural-network mysteries: These networks often take what seem to be computationally hairy problems, like the Go game, and somehow find solutions using far fewer calculations than expected.

It turns out that the math employed by neural networks is simplified thanks to a few special properties of the universe. The first is that the equations that govern many laws of physics, from quantum mechanics to gravity to special relativity, are essentially simple math problems, Tegmark said. The equations involve variables raised to a low power (for instance, 4 or less). [The 11 Most Beautiful Equations]

What's more, objects in the universe are governed by locality, meaning they are limited by the speed of light. Practically speaking, that means neighboring objects in the universe are more likely to influence each other than things that are far from each other, Tegmark said.

Many things in the universe also obey what's called a normal or Gaussian distribution. This is the classic "bell curve" that governs everything from traits such as human height to the speed of gas molecules zooming around in the atmosphere.

Finally, symmetry is woven into the fabric of physics. Think of the veiny pattern on a leaf, or the two arms, eyes and ears of the average human. At the galactic scale, if one travels a light-year to the left or right, or waits a year, the laws of physics are the same, Tegmark said.

Tougher problems to crack

All of these special traits of the universe mean that the problems facing neural networks are actually special math problems that can be radically simplified.

"If you look at the class of data sets that we actually come across in nature, they're way simpler than the sort of worst-case scenario you might imagine," Tegmark said.

There are also problems that would be much tougher for neural networks to crack, including encryption schemes that secure information on the web; such schemes just look like random noise.

"If you feed that into a neural network, it's going to fail just as badly as I am; it's not going to find any patterns," Tegmark said.

While the subatomic laws of nature are simple, the equations describing a bumblebee's flight are incredibly complicated, whereas those governing gas molecules remain simple, Lin added. It's not yet clear whether deep learning will perform just as well describing those complicated bumblebee flights as it will describing gas molecules, he said.

"The point is that some 'emergent' laws of physics, like those governing an ideal gas, remain quite simple, whereas some become quite complicated. So there is a lot of additional work that needs to be done if one is going to answer in detail why deep learning works so well." Lin said. "I think the paper raises a lot more questions than it answers!"
(FULL STORY)

Science of Disbelief: When Did Climate Change Become All About Politics?
[10/7/2016]
Barely over a quarter of Americans know that almost all climate scientists agree that climate change is happening and that humans are to blame, a new Pew Research Center survey finds.

The survey also reveals a strong split between political liberals and political conservatives on the issue. While 55 percent of liberal Democrats say climate scientists are trustworthy, only 15 percent of conservative Republicans say the same.

The findings are in line with the results of other surveys of the politics of climate change, said Anthony Leiserowitz, director of the Yale Program on Climate Change Communication. Leiserowitz was not involved in the Pew study, but he and his colleagues conduct their own surveys on climate attitudes.

"The overwhelming finding that they see here is that there's a strong partisan divide on climate change, and that is a pattern we first saw emerge in 1997," Leiserowitz told Live Science.

The partisan gap isn't necessarily set in stone, however, Leiserowitz said. It's actually been narrowing recently — but it remains to be seen how the result of this year's presidential election may affect the divide.

Prior to 1997, the two major parties held similar beliefs on the occurrence of human-caused climate change, Leiserowitz said. Right around that time, then-President Bill Clinton and then-Vice President Al Gore took on the issue and pushed for the Kyoto Protocol, an international climate treaty meant to reduce greenhouse gas emissions.

"That's the moment when they come back and say, 'This is a global problem, and the U.S. needs to be part of the solution,' that the two parties begin to diverge," Leiserowitz said.

Since then, the American public's belief that climate change is real has fluctuated. Belief that climate change exists and that it's human-caused began to rise around 2004 and hit a peak around 2007, driven by media coverage of California's climate initiatives under Republican Gov. Arnold Schwarzenegger and the Hollywood film "The Day After Tomorrow," released in 2004. (Really: Leiserowitz's research found that Americans who saw the blockbuster were moved to think climate change is a problem. Al Gore's film "An Inconvenient Truth" was released in 2006 but was seen by far fewer people, Leiserowitz said.)

These numbers waned during the 2008 recession, when the media abruptly stopped talking about climate change and the conservative tea-party wing of the Republican Party gained more power, Leiserowitz said. Belief in man-made climate change bottomed out in 2010 and 2011 but has been creeping upward since then, he said. [6 Unexpected Effects of Climate Change]

"That uptick is not coming from Democrats," he said. "Democrats have not really changed much at all. Independents — their belief that global warming is happening — has increased. But the real shift is happening among Republicans, and most interesting, the biggest shift — 19 percentage points — is among conservative Republicans."

But even with those increases, because the percentage of conservative Americans who believed in man-made climate change was so small, the overall proportion of conservatives who believe climate change is caused by human activity is still small. The new Pew survey, conducted between May 10 and June 6, 2016, found that 48 percent of Americans overall believe that the Earth is warming mostly because of human activity. Seventy-nine percent of liberal Democrats held that belief, compared with 63 percent of moderate Democrats, 34 percent of moderate Republicans and 15 percent of conservative Republicans.

Climate scientists have the trust of far more people on the left side of the political spectrum than the right. Only 9 percent of conservative Republicans believe that climate scientists' findings are usually influenced by the best available evidence, compared with 55 percent of liberal Democrats. Only 7 percent of conservative Republicans and 15 percent of moderate Republicans think climate scientists are motivated by concern for the public's best interest, compared with 31 percent of moderate Democrats and 41 percent of liberal Democrats.

Still, up until last spring, the trends were "moving in a more science-aligned direction," Leiserowitz said. Even members of the Republican establishment had been willing to discuss climate change as a problem, Leiserowitz said, citing former presidential candidate John McCain, who had sponsored and supported climate legislation in the U.S. Senate.

"Then, along comes Donald Trump, and he basically flips over all the card tables," Leiserowitz said. The candidate has called climate change a hoax on multiple occasions and once tweeted that "the concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive." Trump has also been consistent in calling for less regulation of fossil fuel emissions. [Election Day 2016: A Guide to the When, Why, What and How]

"It's not clear where he has taken the Republican base," Leiserowitz said. The outcome of the election alone won't be enough to determine what kind of collateral damage climate opinion will accrue. Should Trump lose, Leiserowitz said, the Republican Party will have to decide whether to move even more rightward or whether to take a more centrist tack.

However, Americans' views aren't quite as extreme as the political class would make it seem, Leiserowitz said. Yale's surveys found that about 17 percent of Americans are alarmed about climate change, and 10 percent are entirely dismissive. The other 63 percent believe in, and are worried about, climate change to differing degrees.

"Most Americans are actually in the middle, and more of those people in the middle are leaning pretty well toward the scientific consensus," Leiserowitz said.
(FULL STORY)

Eyeballing Proxima b: Probably Not a Second Earth
[10/7/2016]
In our profound quest to discover strange new worlds, we've inevitably been trying to find alien planets that possess any Earth-like similarities. Now, with the incredible find of an Earth-mass exoplanet orbiting a neighboring star at just the right distance for liquid water to persist on its surface, hopes are high that we may have discovered an "Earth 2.0" right on our galactic doorstep.

But in our rush to assign any terrestrial likeness to this small exoplanet, we often forget that just because it's in the right place and is (apparently) the right mass, it likely has very little resemblance to Earth. And even if it does possess water, it could still be a very strange world indeed.

In a new study headed by scientists at the French National Center for Scientific Research (CNRS) and Cornell University, computer simulations have been run to figure out the possible characteristics of the small rocky world that was discovered orbiting the red dwarf star Proxima Centauri. Located only 4.2 light-years from Earth, the so-called Proxima b was discovered by the ESO's La Silla observatory in Chile and astronomers of the Pale Red Dot campaign to much excitement in August.

By measuring the slight wobbles of Proxima Centauri, the telescope was able not only to decipher the mass of the exoplanet but also to calculate its orbital period. With this information, the researchers realized that the world was orbiting the red dwarf within the star's "habitable zone." The habitable zone of any star is the distance at which a planet can orbit that is not too hot and not too cold for liquid water to persist on its surface.

The implications are clear: on Earth, where there's liquid water, there's life — if there's liquid water on Proxima b, perhaps there's life there too. And, if we look far enough into the future, perhaps we might one day become an interstellar species and set up home there.

But it's worth remembering that we currently have very little information about Proxima b. We know that it has an orbital period of a little over 11 days (yes, a "year" on Proxima b is only 11 days). We know it orbits within the star's habitable zone. We also know its approximate mass. However, we don't know whether or not it has an atmosphere. Also, we don't know Proxima b's physical size. If we don't know its physical size, we can't calculate its average density and therefore there's ambiguity as to what materials it contains. So, in an effort to confront this ambiguity, the researchers ran some simulations of a 1.3 Earth-mass world (the approximate mass of Proxima b) in orbit around a red dwarf star to see what form it might take.

Assuming the rocky world has the smallest physical size allowed for its mass (94 percent of Earth's diameter), according to planetary formation models this would consist of a metal core making up 65 percent of the mass of the entire planet. The outer layers would consist of rocky mantle and very little water (if any). In this scenario, Proxima b would be a rocky, barren and dry world, resembling a massive Mercury. Last time we checked in on Mercury, it didn't appear very "habitable."

But this is just one possibility. The researchers then shifted the scale to the other extreme. What would happen if the physical size of the planet was pushed to the maximum? Well, the mass of Proxima b could support a world that is 40 percent bigger than Earth. Now things get interesting.

In this scenario, Proxima b would be a lot less dense, meaning there would be less rock and metal. A huge proportion of the planet's mass would consist of water. In fact, 50 percent of the entire planet's mass would be water. This would be a "water world" in the strongest possible sense.
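Plugging the article's two end-member sizes into the mean-density formula rho = M / (4/3 * pi * R^3) shows how different these scenarios really are; this is a rough check using round Earth values, not numbers from the study.

```python
import numpy as np

M_EARTH = 5.97e24    # kg
R_EARTH = 6.371e6    # m

def mean_density(mass_earths, radius_earths):
    """Bulk density of a planet given its mass and radius in Earth units."""
    mass = mass_earths * M_EARTH
    radius = radius_earths * R_EARTH
    return mass / (4.0 / 3.0 * np.pi * radius**3)

# 1.3 Earth masses with radii of 0.94 and 1.4 Earth radii (the two extremes above).
for label, r in [("dense, Mercury-like", 0.94), ("water world", 1.40)]:
    print(f"{label}: ~{mean_density(1.3, r):,.0f} kg/m^3")
# For comparison, Earth's mean density is about 5,500 kg/m^3.
```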

Somewhere between these two scenarios — either a dense and barren rock or bloated water world — is the highly sought-after "Earth 2.0"; basically a world with a small metal core, rocky mantle and plentiful oceans flooding the surface. It's this exoplanetary compromise that you regularly see in artistic impressions of Proxima b, the temperate alien world that looks like Earth:

Alas, this version of Proxima b is just one possibility over a huge range of scenarios. So, yeah, from this study alone, Proxima b is probably not very Earth-like. But wait, there's more.

Just because a planet orbits its star in the habitable zone, it doesn't mean it has the same life-giving qualities as Earth (keep in mind that both Mars and Venus also orbit the sun within our solar system's habitable zone).

Proxima b orbits very close to its star. It's the nature of the beast; red dwarf stars are small and therefore cooler than sun-like stars. Proxima Centauri's habitable zone is therefore one hell of a lot more compact than our sun's. The Proxima Centauri habitable zone is well within the orbit of Mercury. If a planet got that close to our hot sun, it would be burnt to a crisp; for a planet in orbit around Proxima Centauri, this location is an oasis.

But when you orbit so close to a red dwarf, a planet starts to succumb to some tidal difficulties. One face of an orbiting planet around a red dwarf will be constantly facing the star, meaning the planet's spin matches its orbital period. One hemisphere of the planet is in constant light while the other hemisphere is in constant darkness — a situation called "tidal locking."

So, in this case, let's imagine the orbiting exoplanet really is a textbook "Earth-like" world with just the right composition. A world with an iron core, rocky mantle and enough water on the surface to create liquid water oceans that could support life. But this world is tidally locked with its star — that's got to cause some problems, right?

Let's assume that this planet somehow possesses an atmosphere (more on that later). Having one hemisphere constantly heated while the other is constantly frozen certainly doesn't sound like a good time. Many simulations have been run in an attempt to model the complexities of the atmospheric conditions in this situation, and most outcomes aren't good. Some scenarios predict planet-wide hurricanes that act like a blast oven; other scenarios predict a dry wasteland on the star-facing hemisphere and a frozen-solid dark hemisphere.

There are, however, some planetary models that could save the day for these unfortunate wannabe "second Earths." One fun prediction is the possible existence of "Eyeball Earths." These peculiar planets would still be tidally locked to their star, with one hemisphere a constantly baked desert and the other hemisphere in deep freeze, but there would be a region between day and night where the conditions are just right for a liquid water ocean to circle the world between the darkness and light. Oh, and it would look like an eyeball, seriously:

In other research around atmospheric dynamics of tidally locked exoplanets, there could be a situation where the world has efficient "air conditioning" — hot air from one hemisphere is distributed about the planet in such a way to balance global temperatures. But this assumes a high degree of friction between the lower atmosphere and a craggy, rocky surface and efficient high-altitude air flow.

But the ultimate kicker when considering "Earth-like" exoplanets around red dwarf stars is that just because red dwarfs are small, it doesn't mean they are docile. In fact, red dwarf stars can be downright violent, frequently erupting with powerful flares, flooding any nearby planets with ionizing radiation. This radiation, plus inevitably powerful stellar winds, would likely blow any atmosphere away from our hypothetical burgeoning Earth 2.0. Without an atmosphere, the only vaguely habitable location on that planet would be under the surface, perhaps in a sub-surface ocean protected by an icy crust like Jupiter's moon Europa.

But, like Earth, if these planets have a powerful global magnetosphere, perhaps the worst of the stellar storm can be deflected and an atmosphere could form, who knows?

Though there are many challenges facing our search for "Earth 2.0," we are only just beginning our quest to seek out alien worlds orbiting other stars. Yes, it is an incredible stroke of luck to find a small world orbiting a neighboring star, but as red dwarfs are the most populous type of star in our galaxy, the odds are that a handful may well have just the right ingredients to support a habitable atmosphere. But is Proxima b one of those diamonds in the rough?

For now, with the tools at our disposal, we simply do not know. Perhaps with the launch of NASA's James Webb Space Telescope in 2018 we might be able to tease out the spectroscopic fingerprint of an atmosphere, but that would likely be beyond its capabilities. So we might just have to send an interstellar probe there to find out if Proxima b is really the habitable exoplanet everyone hopes it will be.
(FULL STORY)

Does the Drake Equation Confirm There Is Intelligent Alien Life in the Galaxy?
[10/6/2016]
The Drake Equation, written by astrophysicist Frank Drake in 1961, is a probabilistic equation to come up with an estimate of the number of intelligent, technological civilizations that should be in the Milky Way—and by extension, the universe. It is the foundation for a number of statistical models that suggest intelligent alien life should be widespread throughout the galaxy. In 1961, Drake's original estimate for the number of intelligent civilizations in our galaxy was between 20 and 50,000,000. As a new episode of PBS's Space Time points out, we have significantly refined our estimates for the number of potentially habitable planets in the Milky Way thanks to the Kepler planet-hunting mission. (We think there are around 40 billion rocky planets orbiting within the habitable zone of their parent stars.)
What we still struggle with is pinning down the probability that life will spring from organic compounds, a process known as abiogenesis, and the probability that basic microbial life will eventually evolve into an intelligent species. To help dial in this estimate a bit more, astrophysicists Adam Frank and Woodruff Sullivan asked how small the intelligent life probability would need to be if we are in fact the only technologically advanced species in the entire universe.

They concluded that if only one intelligent civilization ever existed in the history of the known universe (humans, and nothing else ever before), then the probability that a habitable planet produces intelligent life would have to be less than 1 in 400 billion trillion, or 2.5 x 10^-24—and, even if this were the case, there would still only be a 1 percent chance that no technological civilization ever existed other than humans. This is such a ludicrously small probability that astrophysicists are forced to conclude that we are not the only intelligent civilization to ever exist.

If we narrow the focus to just the Milky Way, then there is still only a 1 in 60 billion chance that a habitable planet produces an advanced civilization, assuming that we are the only such civilization to ever exist in the galaxy. Most people therefore conclude that there must be other intelligent civilizations in our galaxy, if not now then at some point in the past. But we have never detected any, and therein lies the Fermi paradox.
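A crude way to see the flavor of this argument (not the authors' actual calculation): treat each of the galaxy's habitable-zone planets as an independent trial with some tiny per-planet probability p of producing a technological civilization, so the chance that at least one other exists is roughly 1 - exp(-p * N). N below is the article's ~40 billion Milky Way figure; the p values are illustrative, not the paper's derived thresholds.

```python
from math import exp

N = 40e9   # habitable-zone rocky planets in the Milky Way (article's estimate)

for p in (1e-12, 1 / 60e9, 1e-9):
    # Poisson approximation to the binomial probability of at least one success.
    print(f"p = {p:.1e} -> P(at least one other civilization) ~ {1 - exp(-p * N):.3f}")
```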

Check out the Space Time video above to learn more about the likelihood that we are not alone in our galaxy, and be sure to stay tuned for the bonus question at the end of the episode. If you get it correct, PBS will send you a Space Time t-shirt free of charge.
(FULL STORY)

Scientists build world's smallest transistor
[10/6/2016]
Silicon transistors have been getting smaller and smaller, packing more computing power into smaller dimensions all while using less energy. But silicon transistors can't get much smaller.

To keep the trend going, scientists have turned to silicon alternatives. Recently, a team of scientists set a new record for the world's smallest transistor using a pair of novel materials: carbon nanotubes and molybdenum disulfide. Molybdenum disulfide belongs to a class of materials called transition metal dichalcogenides, or TMDs.
Molybdenum disulfide, or MoS2, is an engine lubricant that scientists believe has tremendous potential in the field of electronics. Like silicon, MoS2 boasts a crystalline lattice structure. But electrons don't move as easily through MoS2 as they do through silicon.

Transistors rely on a gate to control the flow of electricity through its terminals. But because silicon allows for such a free flow of electrons, the particles barge through the doors when the gate becomes too small.

"This means we can't turn off the transistors," Sujay Desai, a graduate student at the Department of Energy's Lawrence Berkeley National Laboratory, explained in a news release. "The electrons are out of control."

When electrons are out of control, transistors leak energy.

With MoS2, scientists were able to make the gate -- and the transistor -- much smaller without making it susceptible to gate-crashing electrons. In fact, Desai and his research partners built a transistor with a 1-nanometer gate. A single strand of human hair measures roughly 50,000 nanometers across.

While the feat is impressive, and the technology promising, researchers say there is much work to do.

"This work demonstrated the shortest transistor ever," Ali Javey, a professor of electrical engineering and computer sciences at the University of California, Berkeley. "However, it's a proof of concept. We have not yet packed these transistors onto a chip, and we haven't done this billions of times over."

If the technology is going to make it in the electronics industry, researchers will need to find new ways to produce the materials at scale.

"Large-scale processing and manufacturing of TMD devices down to such small gate lengths will require future innovations," said Moon Kim, professor of materials science and engineering at the University of Texas, Dallas.

Still, researchers are hopeful the breakthrough will translate to smaller, more efficient computer chips and, ultimately, smaller, more efficient electronics.

"A cellphone with this technology built in would not have to be recharged as often," Kim said.
(FULL STORY)

'Alien Megastructure' Star Keeps Getting Stranger
[10/5/2016]
The more scientists learn about "Tabby's Star," the more mysterious the bizarre object gets.

Newly analyzed observations by NASA's planet-hunting Kepler space telescope show that the star KIC 8462852 — whose occasional, dramatic dips in brightness still have astronomers scratching their heads — has also dimmed overall during the last few years.

"The steady brightness change in KIC 8462852 is pretty astounding," study lead author Ben Montet, of the California Institute of Technology in Pasadena, said in a statement.

"Our highly accurate measurements over four years demonstrate that the star really is getting fainter with time," Montet added. "It is unprecedented for this type of star to slowly fade for years, and we don't see anything else like it in the Kepler data."

KIC 8462852 hit the headlines last September, when a team of astronomers led by Tabetha Boyajian of Yale University announced that the star had dimmed dramatically several times over the past few years — in one case, by a whopping 22 percent.

These brightness dips are too significant to be caused by an orbiting planet, so scientists began suggesting alternative explanations. Perhaps a planet or a family of orbiting comets broke up, for example, and the ensuing cloud of dust and fragments periodically blocks the star's light. Or maybe some unknown object in the depths of space between the star and Earth is causing the dimming.

The brightness dips are even consistent with a gigantic energy-collecting structure built by an intelligent civilization — though researchers have been keen to stress that this "alien megastructure" scenario is quite unlikely.

The weirdness increased in January 2016, when astronomer Bradley Schaefer of Louisiana State University reported that KIC 8462852 also seems to have dimmed overall by 14 percent between 1890 and 1989.

This conclusion is based on Schaefer's analysis of photographic plates of the night sky that managed to capture Tabby's Star, which lies about 1,500 light-years from Earth. Some other astronomers questioned this interpretation, however, suggesting that differences in the instruments used to photograph the sky over that time span may be responsible for the apparent long-term dimming.

So Montet and co-author Joshua Simon, of the Observatories of the Carnegie Institution of Washington, decided to scour the Kepler data for any hint of the trend Schaefer spotted. And they found more than just a hint.

Kepler observed KIC 8462852, along with about 150,000 other stars, from 2009 through 2013. During the first three years of that time span, KIC 8462852 got nearly 1 percent dimmer, Montet and Simon found. The star's brightness dropped by a surprising 2 percent over the next six months, and stayed level for the final six months of the observation period. (Kepler has since moved on to a new mission called K2, during which the telescope is hunting for exoplanets on a more limited basis and performing a variety of other observations.)

"This star was already completely unique because of its sporadic dimming episodes," Simon said in the same statement. "But now we see that it has other features that are just as strange, both slowly dimming for almost three years and then suddenly getting fainter much more rapidly."

Montet and Simon said they don't know what's behind the weird behavior of Tabby's Star, but they hope their results, which have been accepted for publication in The Astrophysical Journal, help crack the case eventually.

"It's a big challenge to come up with a good explanation for a star doing three different things that have never been seen before," Montet said. "But these observations will provide an important clue to solving the mystery of KIC 8462852."
(FULL STORY)

What's Out There? 'Star Men' Doc Tackles Life Questions Through Science
[10/5/2016]
The documentary "Star Men," which has just begun to play in select theatres in the United States, uses the life stories of four prominent astronomers to take a compassionate look at aging, death and humanity's search for meaning.

Following a screening of "Star Men" at the California Institute of Technology (Caltech) in Pasadena last month, one of the film's subjects, astronomer Neville (Nick) Woolf, said that when the project began he thought it would be a science documentary set against the backdrop of the American Southwest.

Instead, he was surprised to see that the film is actually centered on the 50-year friendship among himself and three colleagues — Roger Griffin, Donald Lynden-Bell and Wallace (Wal) Sargent — who worked together at Caltech in the early 1960s.
(FULL STORY)

Evidence for new form of matter-antimatter asymmetry observed
[10/4/2016]
Like two siblings with divergent personalities, a type of particle has shown signs of behaving differently than its antimatter partner. It’s the first time evidence of matter-antimatter differences have been detected in decays of a baryon — a category of particle that includes protons and neutrons. Such matter-antimatter discrepancies are key to explaining how the universe came to be made mostly of matter, scientists believe.

The result is “the first measurement of its kind,” says theoretical physicist Yuval Grossman of Cornell University. “Wow, we can actually see something that we’ve never seen before.”

Evidence of matter-antimatter differences in decays of baryons — particles which are composed of three smaller particles known as quarks — has eluded scientists until now. Previous experiments have found differences between matter and antimatter varieties of mesons, which are made up of one quark and one antiquark, but never in baryons.

For most processes, the laws of physics would be the same if matter were swapped with antimatter and the universe’s directions were flipped, as if reflected in a mirror. But when this principle, known as CP symmetry (for “charge parity”), is violated, matter and antimatter act differently. Now, scientists have found hints of CP violation in the decays of a particle known as a lambda-b baryon.

Scientists with the LHCb experiment, located at the Large Hadron Collider near Geneva, reported the result online September 16 at arXiv.org. They found that when the lambda-b baryon decays, the particles produced by the decay speed away at different angles and momenta for matter and antimatter versions of the baryon. (LHCb scientists declined to comment for this article, citing the embargo policy of Nature Physics, the journal to which the paper was submitted.)

After the Big Bang, the universe initially held equal parts antimatter and matter. But as the universe evolved, the laws of physics favored matter through CP violation, and antimatter became a rarity. Scientists’ well-tested theory of particle physics, the standard model, includes some CP violation, but not enough to explain the current imbalance. So physicists are searching for additional sources of the discrepancy.

It’s not surprising that differences in matter and antimatter appeared in baryons as well as mesons, says theoretical physicist David London of the University of Montreal. But precise measurements of baryons might eventually reveal deviations from the predictions of the standard model. Such a result could point the way to additional asymmetry that allowed the universe as we know it to form. “It's just the first step, and hopefully there will be more such measurements,” says London.
(FULL STORY)

Giant hidden Jupiters may explain lonely planet systems
[10/3/2016]
Lonely planets can blame big, pushy bullies. Giant planets may bump off most of their smaller brethren, partly explaining why the Kepler space telescope has seen so many single-planet systems.

Of the thousands of planetary systems Kepler has discovered, about 80 per cent appear as single planets passing in front of their stars. The rest feature as many as seven planets – a distinction dubbed the Kepler dichotomy.

Recent studies suggest even starker differences. While multiple-planet systems tend to have circular orbits that all lie in the same plane – like our solar system – the orbits of singletons tend to be more elliptical and are often misaligned with the spins of their stars.

Now, a pair of computer simulations suggests that hidden giants may lurk in these single systems. We wouldn’t be able to see them: big, Jupiter-like planets in wide orbits take too long to circle their stars for Kepler to catch a transit, and their orbits may never carry them in front of their stars along our line of sight. But if these unseen bullies are there, they may have removed many of the smaller planets in closer orbits, leaving behind the solitary worlds that Kepler sees.

The simulations show that gravitational interactions involving giants in outer orbits can eject smaller planets from the system, nudge them into their stars or send them crashing into each other.

Pushy planets
“There are bigger things out there trying to pull you around,” says Chelsea Huang at the University of Toronto, Canada. She and her team also showed the giants pull the few remaining inner planets into more elliptical and inclined orbits – the same kind seen in many of the single systems Kepler has spotted.

Alex Mustill at Lund Observatory in Sweden and his colleagues simulated more general scenarios, including planets orbiting a binary star system, and got similar results. The studies complement each other, say Huang and Mustill.

“We know these configurations have to occur in some fraction of exoplanet systems,” Mustill says.

But that doesn’t mean they’re universal. “They don’t occur all the time, and this is one reason why you can’t explain the large number of single planets purely through this mechanism,” Mustill says. According to his analysis, bullying giants can only account for about 18 per cent of Kepler’s singles.

To confirm their proposed mechanism, the researchers must wait until next year for the launch of the Transiting Exoplanet Survey Satellite (TESS), which will target closer, brighter systems – making it easier for follow-up observations to uncover the bully planets.
(FULL STORY)

Rarest nucleus reluctant to decay
[10/3/2016]
Nature’s rarest type of atomic nucleus is not giving up its secrets easily.

Scientists looking for the decay of an unusual form of the element tantalum, known as tantalum-180m, have come up empty-handed. Tantalum-180m’s hesitance to decay indicates that it has a half-life of at least 45 million billion years, Bjoern Lehnert and colleagues report online September 13 at arXiv.org. “The half-life is longer than a million times the age of the universe,” says Lehnert, a nuclear physicist at Carleton University in Ottawa. (Scientists estimate the universe’s age at 13.8 billion years.)
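As a quick check of that comparison, 45 million billion years is 4.5 × 10^16 years, so the reported lower bound exceeds the universe's age by a factor of

    \frac{T_{1/2}}{t_{\mathrm{universe}}} \gtrsim \frac{4.5 \times 10^{16}\ \mathrm{yr}}{1.38 \times 10^{10}\ \mathrm{yr}} \approx 3 \times 10^{6},

that is, a few million, consistent with the "more than a million times" statement.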

Making up less than two ten-thousandths of a percent of the mass of the Earth’s crust, the metal tantalum is uncommon. And tantalum-180m is even harder to find. Only 0.01 percent of tantalum is found in this state, making it the rarest known long-lived nuclide, or variety of atom.

Tantalum-180m is a bit of an oddball. It is what’s known as an isomer — its nucleus exists in an “excited,” or high-energy, configuration. Normally, an excited nucleus would quickly drop to a lower energy state, emitting a photon — a particle of light — in the process. But tantalum-180m is “metastable” (hence the “m” in its name), meaning that it gets stuck in its high-energy state.

Tantalum-180m is thought to decay by emitting or capturing an electron, morphing into another element — either tungsten or hafnium — in the process. But this decay has never been observed. Other unusual nuclides, such as those that decay by emitting two electrons simultaneously, can have even longer half-lives than tantalum-180m. But tantalum-180m is unique — it is the longest-lived isomer found in nature.

“It’s a very interesting nucleus,” says nuclear physicist Eric Norman of the University of California, Berkeley, who was not involved with the study. Scientists don’t have a good understanding of such unusual decays, and a measurement of the half-life would help scientists pin down the details of the process and the nucleus’ structure.

Lehnert and colleagues observed a sample of tantalum with a detector designed to catch photons emitted in the decay process. After running the experiment for 176 days, and adding in data from previous incarnations of the experiment, the team saw no evidence of decay. The half-life couldn’t be shorter than 45 million billion years, the scientists determined, or they would have seen some hint of the process. “They did a state-of-the-art measurement,” says Norman. “It's a very difficult thing to see.”

The presence of tantalum-180m in nature is itself a bit of a mystery, too. The element-forging processes that occur in stars and supernovas seem to bypass the nuclide. “People don’t really understand how it is created at all,” says Lehnert.

Tantalum-180m is interesting as a potential energy source, says Norman, although “it’s kind of a crazy idea.” If scientists could find a way to tap the energy stored in the excited nucleus by causing it to decay, it might be useful for applications like nuclear lasers, he says.
(FULL STORY)

Weird Science: 3 Win Nobel for Unusual States of Matter
[10/3/2016]
How is a doughnut like a coffee cup? The answer helped three British-born scientists win the Nobel prize in physics Tuesday.

Their work could help lead to more powerful computers and improved materials for electronics.

David Thouless, Duncan Haldane and Michael Kosterlitz, who are now affiliated with universities in the United States, were honored for work in the 1970s and '80s that shed light on strange states of matter.

"Their discoveries have brought about breakthroughs in the theoretical understanding of matter's mysteries and created new perspectives on the development of innovative materials," the Royal Swedish Academy of Sciences said.

Thouless, 82, is a professor emeritus at the University of Washington. Haldane, 65, is a physics professor at Princeton University in New Jersey. Kosterlitz, 73, is a physics professor at Brown University in Providence, Rhode Island, and currently a visiting lecturer at Aalto University in Helsinki.

The 8 million kronor ($930,000) award was divided with one half going to Thouless and the other to Haldane and Kosterlitz.

They investigated strange states of matter like superconductivity, the ability of a material to conduct electricity without resistance.

Their work drew on an abstract mathematical field called topology, which studies properties that stay the same under continuous deformations such as stretching or bending. In this realm, a doughnut and a coffee cup are basically the same thing because each contains precisely one hole. Such topological properties can change only in whole steps; you can't have half a hole.
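A brief aside on why "whole steps" is the right picture: for a closed orientable surface, the number of holes (the genus g) is tied to the Euler characteristic, an integer invariant,

    \chi = 2 - 2g.

A sphere has g = 0 (χ = 2); a doughnut, or a mug with one handle, has g = 1 (χ = 0). Because g and χ are integers, no amount of continuous stretching or bending can change them by a fraction, which is why topological properties jump in whole steps.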

"Using topology as a tool, they were able to astound the experts," the academy said.

For example, in the 1970s, Kosterlitz and Thouless showed that very thin layers of material — essentially containing only two dimensions rather than three — could undergo fundamental changes known as phase transitions. One example is when a material is chilled enough that it can start showing superconductivity.

Scientists had thought phase changes were impossible in just two dimensions, but the two men showed that changes do occur and that they were rooted in topology.

"This was a radically new way of looking at phases of matter," said Sankar Das Sarma, a physicist at the University of Maryland in College Park.

"Now everywhere we look we find that topology affects the physical world," he said.

Haldane was cited for theoretical studies of chains of magnetic atoms that appear in some materials. He said he found out about the prize through an early morning telephone call.

"My first thought was someone had died," he told The Associated Press. "But then a lady with a Swedish accent was on the line. It was pretty unexpected."

Kosterlitz, a dual U.K.-U.S. citizen, said he got the news in a parking garage while heading to lunch in Helsinki.

"I'm a little bit dazzled. I'm still trying to take it in," he told AP.

Nobel committee member David Haviland said this year's prize was more about theoretical discoveries even though they may result in practical applications.

"These theoreticians have come up with a description of these materials using topological ideas, which have proven very fruitful and has led to a lot of ongoing research about material properties," he said.

Haldane said the award-winning research is just starting to have practical applications.

"The big hope is that some of these new materials could lead to quantum computers and other new technology," he said.

Quantum computers could be powerful tools, but Kosterlitz was not so sure about the prospects for developing them.

"I've been waiting for my desktop quantum computer for years, but it's still showing no signs of appearing," he said. "At the risk of making a bad mistake, I would say that this quantum computation stuff is a long way from being practical."

This year's Nobel Prize announcements started Monday with the medicine award going to Japanese biologist Yoshinori Ohsumi for discoveries on autophagy, the process by which a cell breaks down and recycles content.

The chemistry prize will be announced on Wednesday and the Nobel Peace Prize on Friday. The economics and literature awards will be announced next week.

Besides the prize money, the winners get a medal and a diploma at the award ceremonies on Dec. 10, the anniversary of prize founder Alfred Nobel's death in 1896.
(FULL STORY)

Methane didn’t warm ancient Earth, new simulations suggest
[9/27/2016]
Methane wasn’t the cozy blanket that kept Earth warm hundreds of millions of years ago when the sun was dim, new research suggests.

By simulating the ancient environment, researchers found that abundant sulfate and scant oxygen created conditions that kept down levels of methane — a potent greenhouse gas — around 1.8 billion to 800 million years ago (SN: 11/14/15, p. 18). So something other than methane kept Earth from becoming a snowball during this dim phase in the sun’s life. Researchers report on this new wrinkle in the so-called faint young sun paradox (SN: 5/4/13, p. 30) the week of September 26 in the Proceedings of the National Academy of Sciences.

Limited oxygen increases the production of microbe-made methane in the oceans. With low oxygen early in Earth’s history, many scientists suspected that methane was abundant enough to keep temperatures toasty. Oxygen may have been too sparse, though. Recent work suggests that oxygen concentrations at the time were as low as a thousandth their present-day levels (SN: 11/28/14, p. 14).

Stephanie Olson of the University of California, Riverside and colleagues propose that such low oxygen concentrations thinned the ozone layer that blocks methane-destroying ultraviolet rays. They also estimate that high concentrations of sulfate in seawater at the time helped sustain methane-eating microbes. Together, these processes severely limited methane to levels similar to those seen today — far too low to keep Earth defrosted.
(FULL STORY)

New 'Artificial Synapses' Pave Way for Brain-Like Computers
[9/27/2016]
A brain-inspired computing component provides the most faithful emulation yet of connections among neurons in the human brain, researchers say.

The so-called memristor, an electrical component whose resistance depends on how much charge has passed through it in the past, mimics the way calcium ions behave at the junction between two neurons in the human brain, the study said. That junction is known as a synapse. The researchers said the new device could lead to significant advances in brain-inspired — or neuromorphic — computers, which could be much better at perceptual and learning tasks than traditional computers, as well as far more energy efficient.
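The article does not give the device equations, but the idea of a resistance set by the charge that has already flowed can be sketched with the linear ion-drift toy model often used to introduce memristors. The code below is a minimal illustration of that generic model, not of the device in the study; the parameter values are assumptions chosen for illustration.

    import numpy as np

    R_ON, R_OFF = 100.0, 16e3   # low/high resistance limits in ohms (assumed values)
    D = 10e-9                   # film thickness in metres (assumed)
    MU = 1e-14                  # ion mobility in m^2 s^-1 V^-1 (assumed)

    def run(t, v):
        """Drive the toy memristor with voltage v(t) and return the current trace."""
        w = 0.5 * D                                           # state: doped-region width
        dt = t[1] - t[0]
        current = np.zeros_like(v)
        for k, vk in enumerate(v):
            m = R_ON * (w / D) + R_OFF * (1.0 - w / D)        # memristance M(w)
            i = vk / m
            w = min(max(w + MU * R_ON / D * i * dt, 0.0), D)  # charge-driven drift, clamped
            current[k] = i
        return current

    t = np.linspace(0.0, 2.0, 20_000)
    v = np.sin(2 * np.pi * t)   # 1 Hz sinusoidal drive
    i = run(t, v)
    # Plotting i against v traces the pinched hysteresis loop that defines a memristor:
    # the instantaneous resistance depends on how much charge has already passed through.

In a neuromorphic circuit, that history-dependent conductance is what stands in for the strength of a synapse.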
(FULL STORY)

Stephen Hawking Is Still Afraid of Aliens
[9/25/2016]
Humanity should be wary of seeking out contact with alien civilizations, Stephen Hawking has warned once again.

In 2010, the famed astrophysicist said that intelligent aliens may be rapacious marauders, roaming the cosmos in search of resources to plunder and planets to conquer and colonize. He reiterates that basic concern in "Stephen Hawking's Favorite Places," a new documentary streaming now on the CuriosityStream video service.

"One day, we might receive a signal from a planet like this," Hawking says in the documentary, referring to a potentially habitable alien world known as Gliese 832c. "But we should be wary of answering back. Meeting an advanced civilization could be like Native Americans encountering Columbus. That didn't turn out so well."
(FULL STORY)

The Ig Nobel Prize Winners of 2016
[9/23/2016]
The 2016 Ig Nobel Prizes were announced on Sept. 22, revealing the honorees who were deemed to have made achievements that make people laugh and then make them think. In the 26th year of the ceremony, those honored did not disappoint. From rats wearing polyester pants and rock personalities to the science of BS and the satisfaction of mirror scratching, here's a look at this year's winners.
(FULL STORY)

Teleported Laser Pulses? Quantum Teleportation Approaches Sci-Fi Level
[9/23/2016]
Crewmembers aboard the starship Enterprise on the iconic TV series "Star Trek" could "beam up" from planets to starships, making travel between great distances look easy. While these capabilities are clearly fictional, researchers have now performed "quantum teleportation" of laser pulses over several miles within two city networks of fiber optics.

Although the method described in the research will not replace city subways or buses with transporter booths, it could help lead to hack-proof telecommunications networks, as well as a "quantum internet" to help extraordinarily powerful quantum computers talk to one another.

Teleporting an object from one point in the universe to another without it moving through the space in between may sound like science fiction, but quantum physicists have actually been experimenting with quantum teleportation since 1998. The current distance record for quantum teleportation — a feat announced in 2012 — is about 89 miles (143 kilometers), between the two Canary Islands of La Palma and Tenerife, off the northwest coast of Africa.

Quantum teleportation relies on the bizarre nature of quantum physics, which finds that the fundamental building blocks of the universe, such as subatomic particles, can essentially exist in two or more places at once. Specifically, quantum teleportation depends on a strange phenomenon known as "quantum entanglement," in which objects can become linked and influence each other instantaneously, no matter how far apart they are.

Currently, researchers cannot teleport matter (say, a human) across space, but they can use quantum teleportation to beam information from one place to another. The quantum teleportation of an electron, for example, would first involve entangling a pair of electrons. Next, one of the two electrons — the one to be teleported — would stay in one place while the other electron would be physically transported to whatever destination is desired.

Then, the fundamental details or "quantum state" of the electron to be teleported are analyzed — an act that also destroys its quantum state. Finally, that data is sent to the destination, where it can be used on the other electron to recreate the first one, so that it is indistinguishable from the original. For all intents and purposes, that electron has teleported. (Because the data is sent using regular signals such as light pulses or electrons, quantum teleportation can proceed no faster than the speed of light.)
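The protocol just described can be written out for a single qubit as a small state-vector simulation. The sketch below (plain numpy, with a randomly chosen input state) is the textbook one-qubit version, not the fiber-optic photon experiments reported in the papers.

    import numpy as np

    rng = np.random.default_rng(0)
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Unknown state |psi> sits on wire 0; wires 1 (sender) and 2 (receiver) share a Bell pair.
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)                        # full 3-qubit state, 8 amplitudes

    # "Analyzing" the unknown qubit: a Bell measurement, i.e. CNOT (wire 0 controls wire 1), then H on wire 0.
    state = np.kron(CNOT, I2) @ state
    state = np.kron(H, np.kron(I2, I2)) @ state

    # Measuring wires 0 and 1 destroys the original state and yields two classical bits.
    amps = state.reshape(2, 2, 2)                     # axes: wire 0, wire 1, wire 2
    probs = (np.abs(amps) ** 2).sum(axis=2).ravel()
    outcome = rng.choice(4, p=probs)
    m0, m1 = divmod(int(outcome), 2)

    # The receiver's qubit collapses to one of four branches; applying X^m1 then Z^m0 restores |psi>.
    bob = amps[m0, m1] / np.linalg.norm(amps[m0, m1])
    bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

    print("fidelity with the original state:", abs(np.vdot(psi, bob)) ** 2)   # ~1.0

The two classical bits (m0, m1) are the part that must travel by ordinary signals, which is why the scheme cannot outrun light.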

Now, two research groups independently report quantum teleportation over several miles of fiber-optic networks in the cities of Hefei, China, and Calgary, Alberta. The scientists detailed their findings online Sept. 19 in two independent papers in the journal Nature Photonics.
(FULL STORY)

China Claims It Developed "Quantum" Radar To See Stealth Planes
[9/23/2016]
Beijing's state media has made the bold claim that a Chinese defense contractor successfully developed the world's first quantum radar system. The radar can allegedly detect objects at a range of up to 62 miles. If true, this would greatly diminish the value of so-called "stealth" aircraft, including the B-2 and F-22 Raptor fighter. But it's a pretty far-out claim.

Quantum radar is based on the theory of quantum entanglement and the idea that two different particles can share a relationship with one another to the point that, by studying one particle, you can learn things about the other particle—which could be miles away. These two particles are said to be "entangled".

In quantum radars, a photon is split by a crystal into two entangled photons, a process known as "parametric down-conversion." The radar splits multiple photons into entangled pairs—an A and a B, so to speak. The radar system sends one half of each pair—the As—via microwave beam into the air. The other set, the Bs, remains at the radar base. By studying the photons retained at the radar base, the radar operators can tell what happens to the photons broadcast outward. Did they run into an object? How large was it? How fast was it traveling and in what direction? What does it look like?
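A real entanglement-based radar would rely on joint quantum measurements of the retained and returned photons, which a classical simulation cannot reproduce. The toy Monte Carlo below is only a classical stand-in for the logic of the paragraph above: keeping a per-pulse reference (the Bs) lets returns that are correlated with it be separated from uncorrelated background. The photon counts, reflectivity, and tagging scheme are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    N_PULSES = 100_000        # transmitted "A" photons (illustrative number)
    REFLECTIVITY = 0.01       # fraction that bounce off a target and return
    N_NOISE = 5_000           # uncorrelated background photons reaching the receiver
    TAG_SPACE = 2 ** 48       # size of the per-pulse reference record kept at the base

    tags = rng.integers(0, TAG_SPACE, size=N_PULSES)       # retained records (the "Bs")
    echoes = tags[rng.random(N_PULSES) < REFLECTIVITY]     # "As" that came back from a target
    noise = rng.integers(0, TAG_SPACE, size=N_NOISE)       # background, uncorrelated with the tags
    received = np.concatenate([echoes, noise])

    # Without the reference, every count looks alike: ~1,000 echoes buried in 5,000 background hits.
    total_counts = received.size

    # With the reference, only returns that correlate with a retained record are kept.
    matched = int(np.isin(received, tags).sum())
    accidentals = N_NOISE * N_PULSES / TAG_SPACE           # expected chance matches (tiny)

    print(f"total counts: {total_counts}")
    print(f"reference-matched counts: {matched} (expected accidentals ~{accidentals:.1e})")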

Quantum radars defeat stealth by using subatomic particles, not radio waves. Subatomic particles don't care if an object's shape was designed to reduce a traditional, radio wave-based radar signature. Quantum radar would also ignore traditional radar jamming and spoofing methods such as radio-wave radar jammers and chaff.

According to Global Times, the 14th Institute of China Electronics Technology Group Corporation (CETC) developed the radar system last month. The subdivision website describes the "14th Institute" as "the birthplace of Radar industry (sic) in China", employing 9,000 workers on a 2,000-acre research campus.

China isn't the only country working on quantum radar: Lockheed Martin was granted a patent on a theoretical design in 2008. Lockheed's plans were more far-reaching, including the ability to "visualize useful target details through background and/or camouflaging clutter, through plasma shrouds around hypersonic air vehicles, through the layers of concealment hiding underground facilities, IEDs, mines, and other threats." In many ways, Lockheed's concept of quantum radar resembles the spaceship and handheld sensors on "Star Trek."

Since the 2008 patent, Lockheed's been silent on the subject of quantum radars. Given what a technological leap such a system would be, it's quite possible the research has gone "black"—highly classified and subject to a high level of secrecy.
(FULL STORY)

Earth Wobbles May Have Driven Ancient Humans Out of Africa
[9/22/2016]
Ancient human migrations out of Africa may have been driven by wobbles in Earth's orbit and tilt that led to dramatic swings in climate, a new study finds.

Modern humans first appeared in Africa about 150,000 to 200,000 years ago. It remains a mystery why it then took many millennia for people to disperse across the globe. Recent archaeological and genetic findings suggest that migrations of modern humans out of Africa began at least 100,000 years ago, but most humans outside of Africa likely descended from groups who left the continent more recently — between 40,000 and 70,000 years ago.

Previous research suggested that shifts in climate might help explain why modern human migrations out of Africa happened when they did. For instance, about every 21,000 years, Earth experiences slight changes to its orbit and tilt. This series of wobbles, known as Milankovitch cycles, alters how much sunlight hits different parts of the planet, which in turn influences rainfall levels and the number of people any given region can support.
(FULL STORY)


News Archives