Tech Companies Step Up Fight Against 'Deepfakes'


2019/11/22 20:55  Dow Jones



By Betsy Morris

Big tech companies such as Facebook Inc., Twitter Inc. and Google have benefited from enabling users to easily share pictures or videos, but they are now working to stem the spread of maliciously doctored content ahead of next year's presidential election.

So-called deepfakes are images or videos that have been manipulated through the use of sophisticated machine-learning algorithms to make it almost impossible to differentiate between what's real and what isn't. While the technology has positive applications -- Walt Disney Co. has used algorithms to insert characters in some of its "Star Wars" movies -- it also has been used to create more nefarious content.

The tools to create the fake images are improving so rapidly that "very soon it will be very hard to detect deepfakes with technology," said Dana Rao, executive vice president and general counsel for Adobe Inc., the San Jose, Calif., company best known for Photoshop image-editing software. The fight "will be an arms race," he said.

The number of deepfakes online nearly doubled from December to August, to 14,678, according to a study by cybersecurity startup Deeptrace. The rise has prompted action by tech giants.

Alphabet Inc.'s Google on Wednesday, in an update to its political-advertisement policy, said it was prohibiting the use of deepfakes in political and other ads.

Twitter earlier this month said it was considering identifying manipulated photos, videos and audio shared on its platform. "The risk is that these types of synthetic media and disinformation undermine the public trust and erode our ability to have productive conversations about critical issues," said Yoel Roth, Twitter's head of site integrity.

Facebook, Microsoft Corp. and Amazon.com Inc. are working with more than a half-dozen universities to run a Deepfake Detection Challenge starting next month. It is intended to accelerate research into new ways of detecting and preventing media manipulated to mislead others, Facebook Chief Technology Officer Mike Schroepfer wrote in a blog post in September.

Interest in making deepfakes is growing fast, according to Deeptrace. Two years ago the first deepfakes appeared on Reddit, the popular chat forum. Now at least 20 websites and online forums are devoted to discussions about how to better produce them.

Deeptrace found online services that can generate and sell custom deepfakes in as little as two days, for as little as $2.99 a video.

"It doesn't take a lot of skill," said Matt Turek, a program manager overseeing deepfake-related research and development efforts at the Pentagon's technology incubator, the Defense Advanced Research Projects Agency. The Pentagon is studying deepfakes out of concern that military planners could be fooled into bad decisions if altered images aren't detected.

Darpa has developed a prototype media forensics tool for use by government agencies to detect altered photos and video. It wants to develop additional technology to detect synthetic audio and fake text and identify the source and intent of any manipulated content.

How companies handle deepfakes is another point of conflict between the tech industry and Washington. House Speaker Nancy Pelosi (D., Calif.) denounced Facebook earlier this year for its refusal to take down a doctored video of her. Facebook Chief Executive Mark Zuckerberg, following the incident, said the company was reviewing its policy on deepfakes.

Republican Sen. Jerry Moran of Kansas has co-sponsored legislation with Sen. Catherine Cortez Masto (D., Nev.) to boost research to identify such content manipulation. On Wednesday, Mr. Moran called deepfakes "a specific threat to U.S. voters and consumers by way of misinformation that is increasingly difficult to identify."

Several startups have emerged to work on image verification, and now big tech companies, which have been criticized for not doing more to prevent disinformation, are getting more involved in the fight.

For example, Facebook has amassed more than 100,000 videos featuring actors -- not images drawn from the social media site's actual users -- that researchers can use to help develop and test systems to spot deepfakes.

Google has similarly built up a catalog to hone deepfake-detection research. This year the company assembled a trove of audio clips to help researchers develop ways to identify fake speech that can be spliced into a video. Google also is drawing on its work developing text-to-speech conversion tools to devise new ones that can help authenticate a speaker.

Adobe is taking a different approach. The company has developed a system that will allow authors and publishers to attach information to content, such as who created it and when and where. It is working with New York Times Co. and Twitter and will share the technology, which it aims to turn into an industrywide system for authenticating content.
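The idea of binding creation details to a piece of content can be illustrated with a toy sketch. This is not Adobe's actual system; it is a minimal demonstration, under stated assumptions, of attaching a record of who made something and when and where to the content's hash and signing the record, so that any later change to the bytes or the metadata is detectable. The key, function names and sample data here are all hypothetical, and a real system would use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use public-key signatures


def attach_provenance(content: bytes, author: str, created: str, location: str) -> dict:
    """Bind a provenance record to the content's hash and sign the record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "created": created,
        "location": location,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash and signature; tampering with either the content
    bytes or the attached metadata breaks the check."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # the content was altered after signing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


photo = b"\x89PNG...raw image bytes..."
rec = attach_provenance(photo, "Jane Doe", "2019-11-22T07:55Z", "New York")
print(verify_provenance(photo, rec))         # True for untouched content
print(verify_provenance(photo + b"x", rec))  # False once the bytes change
```

Because the signature covers both the content hash and the metadata, an image can be republished anywhere and still carry a checkable claim about its origin, which is the property an industrywide authentication scheme would rely on.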

Adobe said it would introduce the authentication tool on its Photoshop editing software as an opt-in feature in the next several months. Adobe expects most legitimate authors and creators to opt in, while bad actors wouldn't, Mr. Rao said.

Elsewhere, the nonprofit arm of the AI Foundation, an advocacy group for the safe use of artificial intelligence, has created a website to help campaigns and journalists analyze photos and videos within minutes of receiving them. The portal, called Reality Defender 2020, uses complex algorithms to spot alterations by flagging pixel changes and other anomalies, such as in a candidate's mannerisms, mouth movements, face wrinkles and shadows. It has drawn on research from dozens of academics.

"There is no one silver bullet," said the AI Foundation's founder and chief technology officer, Rob Meadows.

--Till Daldrup contributed to this article.

Write to Betsy Morris at betsy.morris@wsj.com



(END) Dow Jones Newswires

November 22, 2019 07:55 ET (12:55 GMT)

Copyright (c) 2019 Dow Jones & Company, Inc.
