One study found that most people were unable to tell whether they were watching 'deepfake' videos, even when they had been told that the content shown to them had been digitally modified.
The term 'deepfake' refers to videos in which software and deep learning (algorithms that teach computers from data) are used to map a person's face onto footage and make them appear to say things they never said.
Notable examples include a fabricated Apollo 11 presidential address by Richard Nixon and a video of former US President Barack Obama insulting Donald Trump. Some researchers say that criminal use of the technology could make it the most dangerous form of crime in the future.
Researchers from Oxford University, Brown University and the Royal Society showed one group of volunteers in the first experiment five unaltered videos, while the other group saw four unaltered videos and one deepfake. Viewers were then asked to identify which, if any, of the videos were fake.
The researchers used deepfake videos of Tom Cruise created by VFX (visual effects) artist Chris Umé and uploaded to TikTok, which show the American actor performing magic tricks and joking about former Soviet leader Mikhail Gorbachev.
Of the participants who were warned in advance, 20% identified the deepfake video, compared with 10% of those who were not warned.
But surprisingly, even with a direct warning, more than 78% of people could not tell the deepfake apart from authentic content.
The researchers wrote in a preprint of the paper: "Compared to a control group that watched only authentic videos, people browsing ordinary content do not notice anything unusual when they come across deepfake videos."
The paper is expected to be peer-reviewed and published in the coming months.
Participants made the same mistakes regardless of their familiarity with Tom Cruise, their gender, how heavily they used social media, or their confidence in their ability to detect altered video.
The researchers did find that the ability to detect deepfakes was related to age: older participants were better at identifying them.
The researchers predicted that the difficulty of distinguishing real videos from deepfakes circulating among ordinary content could be dangerous, "reducing the informational value of video media".
"People may be so affected by deepfake deception that they rationally stop trusting all online videos, including authentic content," the paper reads.
If this continues, people will have to rely on warning labels and content moderation on social media platforms to keep misleading videos and other misinformation from becoming commonplace.
The article states that Twitter, Facebook and other platforms routinely rely on moderators and on reports from regular users to flag content.
This is a task that becomes difficult if people cannot tell misinformation apart from authentic content.
Facebook in particular has been repeatedly criticised in the past for failing to adequately support its content moderators and to remove inappropriate content.
Research from New York University and the Université Grenoble Alpes in France found that, between August 2020 and January 2021, content from sources that spread false information received six times more likes, shares and interactions than accurate news.
Facebook claims that such research does not show the whole picture, because the engagement a Page receives should not be confused with how many people actually see its content on Facebook.
Researchers have also expressed concern that such warnings could be seen as politically motivated or biased, pointing to conspiracy theories about coronavirus vaccines and to Twitter's labelling of former President Trump's tweets.
In a 2020 study, 15% of people judged a deepfaked video of President Obama calling then-President Trump an irrational person to be genuine, even though the material itself was highly questionable.
Researchers have warned that both deepfakes and content warnings could fuel a more general distrust of online information, and that policymakers should take this into account when weighing the pros and cons of moderating online content.