By THE NEW YORK TIMES
In Phoenix, Ariz., cars are self-navigating the streets. In many homes, people are barking commands at tiny machines, with the machines responding. On our smartphones, apps can now recognize faces in photos and translate from one language to another.
Artificial intelligence is here — and it’s bringing new possibilities, while also raising questions. Do these gadgets and services really behave as advertised? How will they evolve in the years ahead? How quickly will they overhaul the way we live and change the way we do business?
The Times is exploring these matters this week at our annual New Work Summit, featuring technology executives, A.I. researchers, investors and others. Here are some of the key moments coming out of the conference, plus a rundown of some of our recent A.I. stories. — Cade Metz
Trump administration silent on A.I.
Last year, the Chinese government unveiled a plan to become the world leader in artificial intelligence by the year 2030, vowing to create a domestic industry worth $150 billion. This manifesto read like a challenge to the United States, and in many ways, it echoed policies laid down by the Obama administration in 2016.
But as China pushes ahead in this area, many experts are concerned that the Trump administration is not doing enough to keep the United States ahead in the future. Although the big United States internet giants are leading the A.I. race, these experts believe the country as a whole could fall behind if it does not do more to nurture research inside universities and government labs. — Cade Metz
Waymo C.E.O. “really happy” with Uber settlement
John Krafcik, chief executive of the self-driving car company Waymo, took the stage at the New Work Summit on Monday night and spoke out for the first time since his company reached a settlement last week with Uber in a lawsuit over trade secrets that riveted Silicon Valley.
“We were really happy with the outcome that we engineered,” Mr. Krafcik said. “We spent a lot of time in that case talking about the hardware, but the extra benefit we got from that suit was the ability to understand and ensure that Uber wasn’t using any of our software.”
He called the software Waymo’s “secret sauce.”
Waymo and Uber spent only four days at trial last week before settling, with Uber agreeing to provide Waymo 0.34 percent of its stock, worth about $245 million. The dispute between the companies started in 2016 when Uber bought Otto, a start-up founded by Anthony Levandowski, an early member of Google’s self-driving car program. Waymo, which was spun out of Google, accused Mr. Levandowski of stealing technology before leaving and accused Uber of using the misappropriated knowledge.
“This was a really special case with a really special set of circumstances,” Mr. Krafcik said. “For us, this was always about, and really just about, the fact that we needed to ensure Uber wasn’t using our trade secrets.” He added that he did not foresee Waymo suing other former employees.
Mr. Krafcik also discussed how Waymo was looking to start a ride-hailing service, which it is testing in Phoenix with thousands of driverless Pacifica minivans.
“We have a plan to move from city to city,” he said. “We’re not going to be launching with a 25 mile-per-hour product. We’re talking about a full-speed service that will serve a very large geographic area with essentially unlimited pickup and drop-off points.” — Nellie Bowles
No, Amazon isn’t using A.I. to cut jobs
Jeff Wilke, the chief executive of Amazon’s consumer business, which includes its e-commerce operations, doesn’t often make public appearances. But on Monday night, he joined the New Work Summit to discuss the internet retailer’s move into artificial intelligence.
His key message: A.I. is everywhere, but that doesn’t mean it will take our jobs.
“If you look at the evolution of technology over the course of decades, tech doesn’t eliminate work; it changes work,” Mr. Wilke said.
He said that over the last five years, since Amazon bought a robot maker called Kiva Systems, it had built 100,000 of the robots — and also hired 300,000 people. “We still need human judgment,” he said.
Amazon has also embedded A.I. throughout the company, he added, with technologists working together with people who run businesses. The company is using machine learning and deep learning, which are different flavors of A.I., to upgrade internal algorithms, he said.
As to how Amazon might use A.I. at Whole Foods, the grocery store chain that it said it would acquire last year, Mr. Wilke said little. When asked whether Amazon would integrate its cashier-less and A.I.-driven convenience store concept, called Amazon Go, with Whole Foods, he said, “I don’t foresee the format of Whole Foods changing very much.” — Pui-Wing Tam
A.I. has become a campaign issue
As A.I. technology barrels ahead in Silicon Valley, it’s also starting to pick up steam as a political issue in Washington.
Over the weekend, I wrote about Andrew Yang, a former tech executive who has decided to run for president in 2020 as a Democrat on a “beware the robots” platform. He thinks that with innovations like self-driving cars and grocery stores without cashiers just around the corner, we’re about to move into a frightening new era of mass unemployment and social unrest.
So he’s proposing a universal basic income plan called the “Freedom Dividend,” which would give every American adult $1,000 a month to guarantee them a minimum standard of living while they retrain themselves for new kinds of work.
Mr. Yang’s campaign is a long shot, and there are significant hurdles to making universal basic income politically feasible. But the conversation about automation’s social and economic consequences is long overdue. Even if he doesn’t win the election, Mr. Yang may have hit on the next big political wedge issue. — Kevin Roose
Artificial intelligence may be biased
In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it, as Steve Lohr recently wrote, and that means that some of the biases in the real world can seep into A.I.
If the training data contains many more white men than black women, for example, the system will be worse at identifying black women. That appears to be the case with some popular commercial facial recognition software.
Joy Buolamwini, a researcher at the M.I.T. Media Lab, found that the software can now tell whether a white man in a photograph is male or female 99 percent of the time. But for darker-skinned women, it is wrong nearly 35 percent of the time. — Joseph Plambeck