Whale algorithm could unlock secrets of their many dialects

Three long-finned pilot whales, mid-sized as whales go, swim underwater, showing their stubby noses and long, curved fins.

Call it a whalegorithm. A computer has learned to suss out the different dialects of long-finned pilot whales. The approach is a step towards unlocking the secrets of how whales communicate with one another.

Some marine mammals, like sperm whales, develop distinct songs that are particular to their social groups. Just as a human might pick up an accent or a set of idioms from their parents, so too do whales develop their own cultures of communication.

Analysing whale song recordings can help us learn more about these differences. This process normally involves assessing recordings visually, with computers only used to check for specific features like whistles. But this means you might miss important clues to how the whales communicate, says Sarah Hallerberg at the Max Planck Institute for Dynamics and Self-Organization in Germany. “Some features that might seem very relevant to a human might be very different to the whale.”
Ensemble of sounds

Instead, Hallerberg and her colleagues built a “bag of calls” algorithm. The program listens to recordings of groups of animals, examining

Google accepts blame for its self-driving car causing crash


The AI in charge of one of Google’s cars drove into the side of a bus. The incident – which California’s Department of Motor Vehicles documented publicly yesterday – is the first clear-cut case of an accident caused by the tech giant’s self-driving technology.

The bus was driving straight ahead on Silicon Valley’s busy El Camino Real road when Google’s Lexus SUV pulled out into its side, crushing the car’s wing. The accident report says the car sustained some damage to a wheel, bodywork and side-mounted sensors. There were no injuries.

Google’s autonomous cars have been involved in 18 accidents in Mountain View since the company started testing its self-driving systems there in 2010. In all previous accidents, however, another vehicle struck the Google car while it was either stationary or moving slowly. This is the first time that a vehicle controlled by Google’s software seems to have been at fault.

“We clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision,” said Google in a statement.
No yielding

The number 22 bus was carrying 15 passengers on a route criss-crossing Silicon Valley from Palo Alto to San Jose. The Google car

How Astro Noise show interrogates the world of surveillance

Bed Down Location, showing time-lapse video projections of night skies in Yemen, Somalia and Pakistan

Images taken from hacked Israeli drones line the entrance at Astro Noise, the first solo exhibition by film-maker Laura Poitras.

But the show, at New York’s Whitney Museum of American Art, doesn’t hit high gear until you find yourself in the dark. There, you are confronted by a massive screen playing slow-motion footage of civilians gazing on the wreckage of the World Trade Center shortly after the attacks on 11 September 2001.

As the distorted soundtrack unwinds – the US national anthem, sung at a World Series baseball game that November – it occurs to you that the people you’re watching behaved in much the same way as you are now. Heads twist from side to side as they try to read meaning in the twisted material before them. Already, you feel implicated.

On the reverse side of the two-sided screen, two interrogations of Afghan prisoners loop endlessly, adding the rattle of their chains to the still-audible national anthem. The faces of the US guards are obscured by digital smears, masks or shadows. We see the supposed militants quite clearly: soon they

People will follow a robot in an emergency – even if it’s wrong

A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door.

The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.

That was the real choice posed to 30 subjects in a recent experiment at the Georgia Institute of Technology in Atlanta. The results surprised researchers: almost everyone elected to follow the robot – even though it was taking them away from the real exit.

“We were surprised,” says Paul Robinette, the graduate student who led the study. “We thought that there wouldn’t be enough trust, and that we’d have to do something to prove the robot was trustworthy.”

The unexpected result is another piece of a puzzle that roboticists are struggling to solve. If people don’t trust robots enough, then the bots probably won’t be successful in helping us escape disasters or otherwise navigate the real world. But we also don’t want people to follow the instructions of a malicious or buggy machine. To researchers, the nature of

This Is The Defining Photo Of Virtual Reality

Virtual reality has never looked cool. For all of the technology’s far-reaching promise, from the time when VR as we know it first began appearing regularly in film and TV in the late 1970s, the technology has always seemed pretty goofy: a person sitting slack-jawed, wearing a bulky helmet, spasmodically reacting to unreal apparitions that only she or he could see.

Later attempts to make virtual reality look dark and edgy in the 1990s, in films such as The Lawnmower Man and The Matrix and the other, often overlooked Keanu Reeves cyberspace movie, Johnny Mnemonic, succeeded less in making VR look appealing than in making it look threatening and dystopian. Last year, virtual reality’s buffoonishness reached an apex when Palmer Luckey, the young founder of Oculus VR, appeared on the cover of TIME Magazine in an illustration that was quickly mocked and Photoshopped into silly memes around the web.

Now that virtual reality is finally becoming widely available to the public (in the high-end forms of Facebook’s Oculus Rift, the HTC Vive Pre, and Sony’s PlayStation VR; on the low end, Samsung Gear VR and Google Cardboard), it’s fitting that the defining image of our blossoming

Bike Tires That Can Survive The Himalayas

When you’re mountain biking in extreme conditions, you want tires that are light enough for nimble handling, sticky enough to grip rocky and rooty trails, and tough enough to last. Most can’t do it all: Beefy tires have bite, but they’re sluggish; light tires are nimble, but they wear out quickly or puncture.

Vittoria’s new Mezcal and Morsa tires eliminate that trade-off: they are exceptionally tough and elegantly maneuverable.

Here’s why. Graphene, in its most basic form, is a sheet of pure carbon just a single atom thick. It is half the density of aluminum, making it extremely light, and it is also highly elastic.

Vittoria uses graphene that is two to eight atoms thick—nearly invisible. By adding it to mountain-bike tires, Vittoria achieves the dream combo of characteristics—light, sticky, and tough—that rubber can’t deliver on its own. In fact, adding graphene to rubber creates a tire that changes depending on how you ride. When riding on a straightaway, the tire stays relatively hard. When braking or cornering, it softens.

By examining how graphene responds to acceleration and cornering, Vittoria constructed the tires in a way that allows the top and bottom layers to

How To Change The World With Curiosity

The incoming president of the National Academy of Sciences, Marcia McNutt, arrives at the U.S. advisory agency at a crucial moment. Technology is rapidly changing the nature of research, and science education needs to keep pace. Luckily the marine geophysicist, whose term begins in July, has spent her career collecting data, tracking environments, and creating models of our changing world. She’s pro-curiosity and bullish about the impact that big data could have on scientific discovery.
In Her Own Words:

“Scientists have always been explorers—they’ve had to go out and brave the world to make new discoveries. Darwin found unusual finches, and that led to the theory of evolution. But with advances in computing power, scientists won’t necessarily have to leave their homes anymore.

“Future exploration will start with algorithms. Explorers can come up with crazy theories, use computers to sift through our wealth of data, and pull the signal out of the noise, answering the questions: ‘What am I looking for here? What is important?’

“By 2023, a single computer’s processing power will equal that of the entire human race. If you apply that processing power to all the information out there, imagine the discoveries we will make.
“Children love discovery. We should

This Plastic Can Repair Itself

Forty years ago, plastic surpassed steel as the most widely used material in the world. Sure, the affordable and malleable polymers have brought plenty of convenience to modern life (Tupperware! Teflon! Velcro!) as well as taken on more vital roles in airplanes, cars, and smartphones. There’s a catch, however: Unlike many of the metals it replaces, plastic is really hard to fix; even invisible fractures can compromise its strength. A new class of smart plastics can heal breaches all on their own, to mend cracked phone screens or stitch up airplane wings.

Nancy Sottos helped pioneer this field in the ’90s. Her team at the University of Illinois at Urbana-Champaign has developed composites that can repair themselves using a range of methods.

One extends the life span of ships, bridges, and windmills by repairing plastic coatings on metal structures. “Scratches compromise traditional coatings,” Sottos says, which can lead to rust. An early self-healing plastic, now sold by spinoff company Autonomic Materials, has microcapsules embedded throughout it. When the plastic cracks, the capsules burst, releasing resin and a catalyst, which react to fill the crack.
“What if you had a material that didn’t age?”

For things made entirely of composites, like car bumpers or airplane

How Machine Learning Works

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

Using a data set about homes, we will create a machine learning model to distinguish homes in New York from homes in San Francisco.


First, some intuition

Let’s say you had to determine whether a home is in San Francisco or in New York. In machine learning terms, categorizing data points is a classification task.

Since San Francisco is relatively hilly, the elevation of a home may be a good way to distinguish the two cities.

Based on the home-elevation data, you could argue that a home above 240 ft should be classified as one in San Francisco.
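As a minimal sketch of that single-feature rule in code (the 240 ft cut-off comes from the text; the function name is ours):

```python
def classify_by_elevation(elevation_ft):
    """Single-feature classifier using the 240 ft cut-off described above."""
    return "San Francisco" if elevation_ft > 240 else "New York"

print(classify_by_elevation(300))  # "San Francisco"
print(classify_by_elevation(20))   # "New York"
```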

Adding nuance

Adding another dimension allows for more nuance. For example, New York apartments can be extremely expensive per square foot.

So visualizing elevation and price per square foot in a scatterplot helps us distinguish lower-elevation homes.

The data suggests that, among homes at or below 240 ft, those that cost more than $1776 per square foot are in New York City.
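Expressed as code, a hedged sketch of the two-feature rule looks like the following; the thresholds come from the text, while the fallback branch is our own simplification, since homes below both cut-offs could sit in either city.

```python
def classify_home(elevation_ft, price_per_sqft):
    """Two-feature rule combining elevation with price per square foot."""
    if elevation_ft > 240:
        return "San Francisco"   # high elevation strongly suggests SF
    if price_per_sqft > 1776:
        return "New York"        # low-lying but very expensive: likely NYC
    # Below both thresholds the two cities overlap; a real model would keep
    # splitting on further features rather than guessing.
    return "uncertain"

print(classify_home(elevation_ft=10, price_per_sqft=2500))  # "New York"
```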

Dimensions in a data set are called features, predictors, or

Feed Your 3D Printer Recycled Plastic

If you want to 3D-print all of your ideas, you’ll need to start with a hefty supply of plastic filament—the “ink” used by 3D printers. Or you could get a ProtoCycler and make your own.

In 2013, Dennon Oosterman and his former classmates Alex Kay and David Joyce grew tired of churning through expensive filament. So they built a machine that could recycle it back into usable form. The $700 ProtoCycler grinds scrap plastic—such as empty bottles and rejected 3D-printed models—into digestible pieces, melts it down, extrudes it, and winds it onto a spool. To ensure consistency, a computer-controlled diameter-feedback system uses two cameras to accurately measure the width of the filament.
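The article doesn’t publish ProtoCycler’s control code, but the diameter-feedback idea can be pictured as a simple proportional loop. Every name and number below (the 1.75 mm target, the gain) is an illustrative assumption, not the machine’s real parameters.

```python
def winder_speed_multiplier(measured_diameter_mm,
                            target_diameter_mm=1.75,
                            gain=0.5):
    """Toy proportional controller for a filament extruder.

    If the cameras report filament running thick, pull it faster so it
    stretches thinner; if it runs thin, slow down.
    """
    error = measured_diameter_mm - target_diameter_mm
    return 1.0 + gain * error  # multiplier applied to the nominal winding speed

print(winder_speed_multiplier(1.85))  # > 1.0: speed up, filament too thick
print(winder_speed_multiplier(1.65))  # < 1.0: slow down, filament too thin
```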

There are a few drawbacks to recycled plastic. Although the ProtoCycler’s filament starts out with about the same strength as the standard stuff, it gets weaker with each reuse. Reprinting your model too many times might make it brittle and frail. Multicolored plastic also blends like paint. This isn’t a problem at first, but eventually your filament will end up brown. None of that is a big deal if you’re using the printer to rapidly prototype designs, when quantity matters more than quality. “Anyone can 3D-print something over and over

Facebook’s Enormous Internet Drone Is Almost Ready For Primetime

Despite recent setbacks in India, Facebook CEO Mark Zuckerberg is still pushing toward his goal of connecting the world through his philanthropic organization, Internet.org. A large part of that vision rests on an infrastructure of drones that fly over remote locations and beam internet access down to the ground.

These drones, called Aquila (plural being Aquilas?), will fly in tight loops around areas with little or no access to internet, according to Yael Maguire, head of Facebook’s Connectivity Lab.

Today, Zuckerberg wrote that Facebook has been flying prototypes of its drones every week, and the company is in the process of building a full-scale aircraft for a larger test.

The full wingspan of the aircraft will be 139 feet, Zuckerberg says, but the center pod (which presumably contains most of the internal components) is only 10.8 feet wide. He says Aquila will be able to use solar panels on its wings to stay in the air for 3-6 months, and should work in all weather conditions.

And don’t worry, the drone will still have lasers. Zuckerberg seems very excited about the opportunity to put lasers on his next-gen aircraft, denoting them with two (!!)

Here’s How Facebook & Oculus Are Bringing Faster, High-Quality VR Video

There are many uses for virtual reality. Along with gaming, immersive tutorials, and… adult activities (ahem), 360-degree videos are a large reason to get involved in VR. Cameras that record the world around you are trending, with LG and Samsung having gotten into the mix, and VR offers users the chance to view that footage as if they’re actually there.

But when it comes to streaming 360-degree videos online, file sizes aren’t small. Facebook’s investment in Oculus gives them a strong reason to change that.

Facebook has always been fairly open about the behind-the-scenes details of its 360-degree video tech. At MWC 2016 in Barcelona, Max Cohen, Oculus’s vice president and head of mobile, discussed with Popular Science uses for VR beyond gaming, and 360-degree video, delivered via Facebook’s dynamic streaming, is a huge part of that.

“Usually the 360-degree videos you see [in Gear VR] are a 4K video,” Cohen tells us. “4K quality is spread all around you but you only see a segment of that at one time. What dynamic streaming does is chop up that video into lots of little pyramids and then show you the highest resolution wherever you’re looking at and a lower resolution wherever you’re not.”
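A rough way to picture that idea in code: pre-encode several variants of the video, each sharpest around one viewing direction, and stream whichever variant best matches where the viewer’s head is pointed. Everything below (the variant names, the six directions, the function names) is invented for illustration and is not Facebook’s or Oculus’s actual API.

```python
import math

# Hypothetical pre-encoded variants: each covers the full sphere but is
# highest-resolution around one view centre (yaw, pitch in degrees).
VARIANTS = [
    {"name": "front", "yaw": 0,   "pitch": 0},
    {"name": "right", "yaw": 90,  "pitch": 0},
    {"name": "back",  "yaw": 180, "pitch": 0},
    {"name": "left",  "yaw": -90, "pitch": 0},
    {"name": "up",    "yaw": 0,   "pitch": 90},
    {"name": "down",  "yaw": 0,   "pitch": -90},
]

def angle_between(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle between two view directions, in degrees."""
    y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
    c = (math.sin(p1) * math.sin(p2) +
         math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def pick_variant(view_yaw, view_pitch):
    """Stream the variant whose sharp region best matches the viewer's gaze."""
    return min(VARIANTS,
               key=lambda v: angle_between(view_yaw, view_pitch,
                                           v["yaw"], v["pitch"]))

print(pick_variant(85, 5)["name"])  # "right"
```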

“The benefit

Artists Covertly 3D Scanned Nefertiti’s Bust And Released It Online

Nefertiti belongs to the ages. Does her likeness belong to the people? That’s a question for philosophers and museum curators to debate, but thanks to an enterprising group of secretive scanners, it has a practical answer: yes. Using Microsoft Kinect scanners hidden under scarves, Nora Al-Badri, a German-Iraqi artist, and Jan Nikolai Nelles, a German artist, recorded the bust of Nefertiti at the Neues Museum in Berlin last year. That bust is now available as a 3D rendering that people can download and 3D print.

Queen Nefertiti ruled Egypt with her husband, the pharaoh Akhenaten, during the 18th dynasty, around 1350 BC to 1334 BC. Archaeologists search for her tomb to this day. The bust of Nefertiti was found in an ancient sculptor’s workshop in Egypt in 1912 and taken to Germany, where it has resided ever since.

Since the bust’s discovery, it’s become both a symbol and figurehead of tensions in preservation culture and museums. Standards for archaeological work have changed a lot over the past century, and the once-fashionable idea that artifacts needed to be in European museums for safekeeping is now met with calls for returning art and artifacts to

Facebook’s Long, Psychological Journey To Reactions

Since at least 2012, Facebook has been searching for ways to make its service more expressive. In the minds of product developers and managers at the social media company, there shouldn’t be an emotion that you can’t express on Facebook. The group’s quest brought them all the way back to Charles Darwin, with pit stops at the University of California, Berkeley, and Pixar.

The Reactions rolled out today are meant to be a quick way to express a flash of emotion, like amazement, a laugh, anger, a pang of sadness, or the feeling of love. Facebook has taken these emotions and assigned them animated emoticons with bits of text: Love, Haha, Wow, Sad, and Angry. These don’t replace the Like button, but are meant to be used alongside it.

Facebook’s new Reactions make the Like more robust.

Really like something? Pop a Love on it. See an awesome photo of a double rainbow? Drop some Wows.

This is huge. The Like button is iconic and monolithic, and the last remaining binary (on or off) feature that Facebook has. Businesses hang “Like Us On Facebook” on their doors, and the Like has pretty much been copied by every other

Google Will Draw Your Selfie Using An Algorithm

Google may not have announced anything at this year’s Mobile World Congress 2016, but that hasn’t stopped Android from taking part in the festivities. The first letter in Google’s Alphabet took to MWC to tout Android Experiments—a group of open source projects that run on the search company’s mobile platform. Among these projects: your face. Well, sort of.

The selfie machine, officially called the Ioio (pronounced yoyo) Plotter, uses an algorithm to plot your photograph onto paper. Like all Android Experiment projects, the developer, Ytai Ben-Tsvi, has made his code available on Google’s site. You may not be able to buy a selfie-drawing machine, but you can certainly make one.

“All it takes is Ytai’s code and some experience with physical computing,” Google’s Isaac Blankensmith tells us. “If you’ve used an Arduino, you can do it.”

“The algorithm chooses the darkest point [of the photo],” says Blankensmith, who works as a designer with Google’s Creative Lab. “And then within a radius, it looks at 200 other random points. Of those, whichever is darkest is where it draws the next line to. And it repeats that over and over again, until it draws one continuous vector line drawing.”
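As described, the drawing routine is a greedy random walk toward darkness. Here is a hedged sketch of that loop in Python; the 200 candidate points come from the quote, while the radius, the lightening step, and the helper names are our guesses rather than Ytai Ben-Tsvi’s published code.

```python
import random
from PIL import Image  # Pillow, used here to load the photo

def plot_selfie_path(image_path, num_candidates=200, radius=40, num_segments=3000):
    """Greedy 'darkest point' walk, as described in the quote above."""
    img = Image.open(image_path).convert("L")  # greyscale: 0 = black, 255 = white
    pixels = img.load()
    w, h = img.size

    # Start at the darkest pixel in the image.
    current = min(((x, y) for x in range(w) for y in range(h)),
                  key=lambda p: pixels[p])
    path = [current]

    for _ in range(num_segments):
        cx, cy = current
        # Look at random candidate points within a radius of the pen's position.
        candidates = [
            (min(max(cx + random.randint(-radius, radius), 0), w - 1),
             min(max(cy + random.randint(-radius, radius), 0), h - 1))
            for _ in range(num_candidates)
        ]
        # Move to the darkest candidate, then lighten it so the pen
        # doesn't keep revisiting the same spot.
        current = min(candidates, key=lambda p: pixels[p])
        pixels[current] = min(pixels[current] + 60, 255)
        path.append(current)

    return path  # one continuous polyline to send to the plotter
```

Feeding the resulting polyline to a pen plotter would reproduce the portrait as a single unbroken line.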

The setup consists of a phone,

Watch Google’s Humanoid Robot Learn The World Is A Harsh Place

Atlas is a robot built to save humanity. Or at least humans. Created by Boston Dynamics, the robotics wing of Google’s parent company Alphabet, Atlas began life as a DARPA rescue robot project. With a requirement that it be roughly human-shaped, much of the work on Atlas is about balance, with some early stumbles. Now that Atlas is much better at walking, it’s moving on to simple tasks, like sweeping a warehouse. Last night, Boston Dynamics released video of a newer, smaller version of Atlas venturing into the world.

This new, smaller Atlas is just 5’9” and 180 lbs. It made friends in the wild.

Those contacts led quickly to a job at a warehouse.

Not everyone at the warehouse was happy to share a workplace with a robot.

Some people were, in fact, very unhappy.

Fortunately, Atlas was able to pick itself up and walk into a less-hostile environment.

The scene is more morally uncomfortable for the people watching what happens to the human-shaped machine than it is for the machine itself. And though the video looks like an anti-robot-cruelty ad, the purpose of bullying Atlas was scientific. For the robot to function in a human environment, it needs to work through some fairly common

At Last, Space Brewer Lets Astronauts Make Real Coffee In A Cup

Of sacred morning routines, few stack up to the ritual of rolling out of bed and sipping that first sweet, sweet cup of steaming coffee. Unless you’re an astronaut on the International Space Station. One of the small yet significant personal sacrifices they make for space: there’s no steaming mug up there.

Technically, there is coffee on board the station, but it’s made by squeezing hot water into a pouch of custom blended, freeze-dried coffee, and sipped through a straw. It only barely clears the bar for being a comforting drink, and no savory coffee smell makes it through that bag, either.

But thanks to a new version of a specially designed cup sent to the ISS last year, astronauts can now brew a fresh cup of coffee.

Astronaut Kjell Lindgren demonstrated space’s first “pour-over-style” coffee in a video uploaded Friday by NASA. Drew Wollman, a materials and mechanical engineer at Portland State University and IRPI, LLC, along with IRPI senior scientist Mark Weislogel, designed and produced the brewer in only one week last April, after Wollman met up with Lindgren in Houston and learned how to use the space cup. Lindgren inquired whether there was any way to brew

Fleshy Dog Hates Robot Dog

“Stand back, interloper!” the dog bellows with every bark, if we are willing to accept creative interpretation of what the dog is saying. “Begone, you iron mockery of my existence!” it yelps and howls as the BigDog robot, made by Google-owned Boston Dynamics, stares it down in a parking lot. “What cruel world spawned this monstrosity?”

BigDog does not react, its mute body and spinning cyclopean eye simply taking in the world as raw emotionless data. BigDog’s human operator happily steers the contraption away from its would-be assailant.

This is hardly the first time animals have reacted poorly to trespassing machines. The internet is full of videographic evidence of animal-on-robot violence, and some animals are now specifically trained to attack unmanned aerial intruders. A study of bears found that, while they appeared outwardly calm when pestered by drones, their heart rates and stress levels spiked.

It is no surprise, then, that a real dog would hate a robot dog. And if the Terminator universe is any guideline for the future (it isn’t), this canine reaction to K9s will save lives.

Director Of FBI Addresses Congress About San Bernardino iPhone

The Apple-versus-FBI debate continued today in front of Congress. The FBI retrieved an iPhone belonging to one of the shooters involved in the San Bernardino, California, attack. The FBI believes that information crucial to the case can be found on the smartphone but, without the device’s passcode, is unable to access it. With Apple refusing to provide a new version of its software that would allow the FBI to unlock the iPhone more easily, matters have been taken to Washington.

The director of the FBI, James Comey, presented his case to Congress as to why he and his team deserved to get into that phone with Apple’s help. Along with lengthy banter about the College of William &amp; Mary, Comey explained why using the All Writs Act of 1789 was a viable way to get Apple to open up the phone.

In a court decision yesterday, New York’s Judge James Orenstein sided with the tech company. Now Congress seems to have Apple’s back as well.

Many are concerned with the matter of precedent in this case. As Representative Bob Goodlatte and other House members mentioned, this could set a bad precedent for any case involving an encrypted device. To which Director Comey agreed that yes,

People Trust Robots To Lead Them Out Of Danger, Even When They Shouldn’t

Researchers from the Georgia Tech Research Institute decided to see whether people would accept the authority of a robot in an emergency. For the most part, people did, giving the team results that might as well have been dreamt up by the writers of The Office.

The team asked more than 40 volunteers to individually follow a robot labeled “Emergency Guide Robot”. The researchers had the robot (which was controlled remotely by the scientists) lead them to a conference room, but in a few cases the robot first led the test subjects into the wrong room, where it travelled in circles. In others, the robot stopped and participants were told it had broken. After getting the volunteers into the conference room, the researchers filled the hallway with smoke and set off a smoke alarm, placing the untrustworthy robot outside the door.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, an engineer who conducted the study. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We