All too often, technologists don't get to build artificial intelligence models the right way, based on a careful weighing of the pros and cons, AI researcher Timnit Gebru told Tom Simonite during a RE:WIRED conversation on Tuesday. In the rush to publish new research and push out ever larger models, "we haven't had the time to think about how it should even be built because we're always just putting out fires," she said.

As Simonite recounted in a WIRED cover story in June, until a year ago Gebru was a researcher at Google, where she co-led a team dedicated to ethics in AI. She says she was forced out in a dispute over a research paper that detailed the ways things can go awry with large language models, which power services like machine translation and Google search. The experience left her convinced that the incentives behind current AI research are all wrong, meant to "help the Defense Department figure out how to kill more people more efficiently" or make more money for multinational corporations.

Gebru plans to open an independent interdisciplinary institute for AI ethics and accountability on December 2, one year to the day from her ouster from Google. "The hope is that instead of just constantly critiquing technology after the fact … we can also maybe model a positive example for how we should do AI research," she said.

Watch the discussion with Timnit Gebru.