He called the software Waymo’s “secret sauce.”
Waymo and Uber spent only four days at trial last week before settling, with Uber agreeing to provide Waymo 0.34 percent of its stock, worth about $245 million. The dispute between the companies started in 2016 when Uber bought Otto, a start-up founded by Anthony Levandowski, an early member of Google’s self-driving car program. Waymo, which was spun out of Google, accused Mr. Levandowski of stealing technology before leaving and accused Uber of using the misappropriated knowledge.
“This was a really special case with a really special set of circumstances,” Mr. Krafcik said. “For us, this was always about, and really just about, the fact that we needed to ensure Uber wasn’t using our trade secrets.” He added that he did not foresee Waymo suing other former employees.
Mr. Krafcik also discussed how Waymo was looking to start a ride-hailing service, which it is testing in Phoenix with thousands of driverless Pacifica minivans.
“We have a plan to move from city to city,” he said. “We’re not going to be launching with a 25 mile-per-hour product. We’re talking about a full-speed service that will serve a very large geographic area with essentially unlimited pickup and drop-off points.” — Nellie Bowles
No, Amazon isn’t using A.I. to cut jobs
Jeff Wilke, the chief executive of Amazon’s consumer business, which includes its e-commerce operations, doesn’t often make public appearances. But on Monday night, he joined the New Work Summit to discuss the internet retailer’s move into artificial intelligence.
His key message: A.I. is everywhere, but that doesn’t mean it will take our jobs.
“If you look at the evolution of technology over the course of decades, tech doesn’t eliminate work; it changes work,” Mr. Wilke said.
He said that over the last five years, since Amazon bought a robot maker called Kiva Systems, it had built 100,000 of the robots — and also hired 300,000 people. “We still need human judgment,” he said.
Amazon has also embedded A.I. throughout the company, he added, with technologists working together with people who run businesses. The company is using machine learning and deep learning, which are different flavors of A.I., to upgrade internal algorithms, he said.
As to how Amazon might use A.I. at Whole Foods, the grocery store chain that it said it would acquire last year, Mr. Wilke said little. When asked whether Amazon would integrate its cashier-less and A.I.-driven convenience store concept, called Amazon Go, with Whole Foods, he said, “I don’t foresee the format of Whole Foods changing very much.” — Pui-Wing Tam
A.I. has become a campaign issue
As A.I. technology barrels ahead in Silicon Valley, it’s also starting to pick up steam as a political issue in Washington.
Over the weekend, I wrote about Andrew Yang, a former tech executive who has decided to run for president in 2020 as a Democrat on a “beware the robots” platform. He thinks that with innovations like self-driving cars and grocery stores without cashiers just around the corner, we’re about to move into a frightening new era of mass unemployment and social unrest.
So he’s proposing a universal basic income plan called the “Freedom Dividend,” which would give every American adult $1,000 a month to guarantee them a minimum standard of living while they retrain themselves for new kinds of work.
Mr. Yang’s campaign is a long shot, and there are significant hurdles to making universal basic income politically feasible. But the conversation about automation’s social and economic consequences is long overdue. Even if he doesn’t win the election, Mr. Yang may have hit on the next big political wedge issue. — Kevin Roose
Artificial intelligence may be biased
In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it, as Steve Lohr recently wrote, and that means that some of the biases in the real world can seep into A.I.
If the training data contains many more white men than black women, for example, the software will be worse at identifying the black women. That appears to be the case with some popular commercial facial recognition software.
Joy Buolamwini, a researcher at the M.I.T. Media Lab, found that the software can now correctly tell whether a white man in a photograph is male or female 99 percent of the time. But for darker-skinned women, it is wrong nearly 35 percent of the time. — Joseph Plambeck