Tech Companies Step Up Fight Against 'Deepfakes'


2019/11/22 20:55  Dow Jones


By Betsy Morris

Big tech companies such as Facebook Inc., Twitter Inc. and Google have benefited from enabling users to easily share pictures or videos, but they are now working to stem the spread of maliciously doctored content ahead of next year's presidential election.

So-called deepfakes are images or videos that have been manipulated through the use of sophisticated machine-learning algorithms to make it almost impossible to differentiate between what's real and what isn't. While the technology has positive applications -- Walt Disney Co. has used algorithms to insert characters in some of its "Star Wars" movies -- it also has been used to create more nefarious content.

The tools to create the fake images are improving so rapidly that "very soon it will be very hard to detect deepfakes with technology," said Dana Rao, executive vice president and general counsel for Adobe Inc., the San Francisco company best known for Photoshop image-editing software. The fight "will be an arms race," he said.

The number of deepfakes online nearly doubled from December to August, to 14,678, according to a study by cybersecurity startup Deeptrace. The rise has prompted action by tech giants.

Alphabet Inc.'s Google on Wednesday, in an update to its political advertisement policy, said it was prohibiting the use of deepfakes in those and other ads.

Twitter earlier this month said it was considering identifying manipulated photos, videos and audio shared on its platform. "The risk is that these types of synthetic media and disinformation undermine the public trust and erode our ability to have productive conversations about critical issues," said Yoel Roth, Twitter's head of site integrity.

Facebook, Microsoft Corp. and Inc. are working with more than a half-dozen universities to run a Deepfake Detection Challenge starting next month. It is intended to accelerate research into new ways of detecting and preventing media manipulated to mislead others, Facebook Chief Technology Officer Mike Schroepfer wrote in a blog post in September.

Interest in making deepfakes is growing fast, according to Deeptrace. Two years ago the first deepfakes appeared on Reddit, the popular chat forum. Now at least 20 websites and online forums are devoted to discussions about how to better produce them.

Deeptrace researchers also found online services that generate and sell custom deepfakes in as little as two days, for as little as $2.99 a video.

"It doesn't take a lot of skill," said Matt Turek, a program manager overseeing deepfake-related research and development efforts at the Pentagon's technology incubator, the Defense Advanced Research Projects Agency. The Pentagon is studying deepfakes out of concern that military planners could be fooled into bad decisions if altered images aren't detected.

Darpa has developed a prototype media forensics tool for use by government agencies to detect altered photos and video. It wants to develop additional technology to detect synthetic audio and fake text and identify the source and intent of any manipulated content.

How to deal with deepfakes is another point of conflict between tech companies and Washington. House Speaker Nancy Pelosi (D., Calif.) denounced Facebook earlier this year for its refusal to take down a doctored video of her. Facebook Chief Executive Mark Zuckerberg, following the incident, said the company was reviewing its policy on deepfakes.

Republican Sen. Jerry Moran of Kansas has co-sponsored legislation with Sen. Catherine Cortez Masto (D., Nev.) to boost research to identify such content manipulation. On Wednesday, Mr. Moran called deepfakes "a specific threat to U.S. voters and consumers by way of misinformation that is increasingly difficult to identify."

Several startups have emerged to work on image verification, and now big tech companies, which have been criticized for not doing more to prevent disinformation, are getting more involved in the fight.

For example, Facebook has amassed more than 100,000 videos featuring actors -- not images drawn from the social media site's actual users -- that researchers can use to help develop and test systems to spot deepfakes.

Google has similarly built up a catalog to hone deepfake-detection research. This year the company assembled a trove of audio clips to help researchers develop ways to identify fake speech that can be spliced into a video. Google also is drawing on its work developing text-to-speech conversion tools to devise new ones that can help authenticate a speaker.

Adobe is taking a different approach. The company has developed a system that allows authors and publishers to attach information to content, such as who created it and when and where. It is working with New York Times Co. and Twitter and will share the technology, which it aims to make the basis of an industrywide system for authenticating content.

Adobe said it would introduce the authentication tool on its Photoshop editing software as an opt-in feature in the next several months. Adobe expects most legitimate authors and creators to opt in, while bad actors wouldn't, Mr. Rao said.
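Adobe hasn't published the details of its authentication system in the article, but the attach-and-verify idea it describes can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names are hypothetical, and a keyed HMAC stands in for whatever signing scheme Adobe actually uses. Metadata (creator, time, place) is bundled with a hash of the content and signed, so any later edit to either the pixels or the metadata breaks verification:

```python
import hashlib
import hmac
import json

# Stand-in for a real private signing key held by the author's tool.
SECRET_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, author: str, created: str, location: str) -> dict:
    """Bundle content metadata with a content hash and a tamper-evident signature."""
    record = {
        "author": author,
        "created": created,
        "location": location,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any edit to content or metadata fails."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    expected = hmac.new(SECRET_KEY, json.dumps(claimed, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

The opt-in dynamic Mr. Rao describes follows naturally: content carrying a valid record can be trusted to be unaltered since signing, while content without one simply makes no claim.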

Elsewhere, the nonprofit arm of the AI Foundation, an advocacy group for the safe use of artificial intelligence, has created a website to help campaigns and journalists analyze photos and videos within minutes of receiving them. The portal, called Reality Defender 2020, uses complex algorithms to detect pixel changes and other anomalies, such as in a candidate's mannerisms, mouth movements, face wrinkles and shadows, to detect alterations. It has drawn on research from dozens of academics.

"There is no one silver bullet," said the AI Foundation's founder and chief technology officer, Rob Meadows.

--Till Daldrup contributed to this article.

Write to Betsy Morris at

(END) Dow Jones Newswires

November 22, 2019 07:55 ET (12:55 GMT)

Copyright (c) 2019 Dow Jones & Company, Inc.

