PLUS: The latest AI and tech news.
By Jennifer Conrad | 01.31.22

Good morning! Tonight marks the start of Lunar New Year celebrations for many around the world. I'll be eating dumplings and taking selfies for the IRS.

Eric McCarter remembers the first time he operated a forklift truck in France—while sitting at a desk in California. As a tester for Phantom Auto, he sat behind a steering wheel and pedals that transmitted commands to the forklift thousands of miles away; large screens offered views in front, behind, and to the sides of the vehicle. As Will Knight reports, the vehicle relies on limited artificial intelligence to avoid obstacles and come safely to a stop should the connection between France and the US fail. But the AI isn't yet clever enough to let the forklift navigate on its own through an unfamiliar warehouse or take on a new task. Another company, Cobalt Robotics, makes robotic security guards that navigate a building autonomously but require human help when they get stuck. Those human operators can also speak through a microphone if the robot comes across a stranger. In other cases, self-driving trucks can be taken over by a remote driver if the vehicle has trouble navigating in bad weather.

A Far-Away Future

Many office workers have gone remote during the pandemic. The continued challenges posed by the virus and a deepening labor shortage—combined with advances in technologies such as AI and virtual reality—are allowing a small but growing number of physical jobs to go remote as well. Read about the latest jobs to go remote.

In November, the US Internal Revenue Service launched an online security system that uses face recognition to confirm a person's identity for certain services. Public attention triggered an outcry last week, in part because of concerns that face recognition can be less accurate for people of color, Tom Simonite reports.
Some IRS functions, like scheduling payments—but not filing taxes—now require first-time users to verify their identity with Virginia startup ID.me, which also works with 27 state employment agencies. The US Treasury Department is reportedly looking into alternatives to ID.me's system, but submitting selfies to access online government services will likely only become more common—it's required by US federal security guidelines from 2017 that aim to prevent fraud.

Making such a system universally accessible poses a challenge. An agency like the IRS has to serve a user base similar in scale to that of a large tech company, but unlike a hot startup it must also include society's least connected. The US government's track record on digital inclusion is mixed: ID.me says it has 650 locations where people can complete enrollment in person—a small number in a big country.

New Tech, New Concerns

Caitlin Seeley George, of the nonprofit Fight for the Future, says ID.me uses the specter of fraud to sell technology that locks out vulnerable people and creates a stockpile of highly sensitive data that will itself be targeted by criminals. "A tool that creates more problems can't be hailed as a solution," she says. Read about US government efforts to use facial recognition to verify identities.

When a natural language processing system encounters a word—suit—that could denote multiple things—an article of clothing or a legal action—it devotes itself to analyzing ever greater chunks of correlated information in an effort to pinpoint the word's exact meaning. That works 99.9 percent of the time, as when the program correctly concludes that the word suit is part of a judge's email to counsel. The other 0.1 percent of the time, the AI snaps, write Ideas contributors Angus Fletcher and Erik J. Larson. It misidentifies a diving suit as a lawyerly conversation, plunging into an ocean that it thinks is a courtroom.
The authors suggest that instead of designing AI to prioritize resolving ambiguous data, it could be programmed to perform quick-and-dirty recalls of all possible meanings. The program would then carry those options on to its subsequent tasks, like a human brain reading a poem with multiple potential interpretations held simultaneously in mind. In many cases, the ambiguity will resolve itself: maybe each use of suit suggests the same meaning; maybe the user realizes they mistyped suite. Worst case, the system can pause to request human assistance. And in any case, the AI won't break itself, making unnecessary errors because it's so stressed about being perfect.

AI That Bounces Back

Instead of pursuing faster machine learners that crunch ever-vaster piles of data, the focus should be on making AI more tolerant of bad information, user error, and environmental factors. That AI would exchange near-perfection for consistent adequacy while sacrificing nothing essential. Read why AI needs the capacity to tolerate ambiguity.
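To make the idea concrete, here is a minimal sketch of the "carry ambiguity forward" approach the authors describe: recall every known sense of a word, prune only as later context permits, and ask a human when ambiguity persists. The sense inventory, context hints, and function names are hypothetical illustrations, not anything from the essay or a real NLP library.

```python
# Toy sense inventory; entries are hypothetical.
SENSES = {
    "suit": {"clothing", "lawsuit", "card_suit"},
    "suite": {"rooms", "software_bundle"},
}

# Hypothetical context hints: a nearby word supports certain senses.
HINTS = {
    "judge": {"lawsuit"},
    "tailor": {"clothing"},
    "poker": {"card_suit"},
}

def recall_senses(word):
    """Quick-and-dirty recall: return every known sense, unranked."""
    return set(SENSES.get(word, {word}))

def narrow(candidates, context_words):
    """Keep only senses compatible with later context, if any hint applies."""
    supported = set()
    for w in context_words:
        supported |= HINTS.get(w, set())
    narrowed = candidates & supported
    # If no hint applied, the ambiguity persists; carry all candidates forward.
    return narrowed if narrowed else candidates

def interpret(word, context_words):
    """Resolve a sense if context allows; otherwise signal for human help."""
    candidates = narrow(recall_senses(word), context_words)
    if len(candidates) == 1:
        return candidates.pop()  # the ambiguity resolved itself
    return None                  # worst case: pause and ask a human

print(interpret("suit", ["judge", "counsel"]))  # → lawsuit
print(interpret("suit", ["ocean"]))             # → None (request assistance)
```

The key design choice is that `narrow` never forces a decision: when no context hint applies, it passes the full candidate set along rather than guessing, which is the trade of near-perfection for consistent adequacy the authors advocate.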